Jul 2 01:49:15.972775 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 2 01:49:15.972793 kernel: Linux version 5.15.161-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Jul 1 23:37:37 -00 2024
Jul 2 01:49:15.972800 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jul 2 01:49:15.972808 kernel: printk: bootconsole [pl11] enabled
Jul 2 01:49:15.972813 kernel: efi: EFI v2.70 by EDK II
Jul 2 01:49:15.972818 kernel: efi: ACPI 2.0=0x3fd89018 SMBIOS=0x3fd66000 SMBIOS 3.0=0x3fd64000 MEMATTR=0x3ef3c198 RNG=0x3fd89998 MEMRESERVE=0x37b33f98
Jul 2 01:49:15.972824 kernel: random: crng init done
Jul 2 01:49:15.972830 kernel: ACPI: Early table checksum verification disabled
Jul 2 01:49:15.972836 kernel: ACPI: RSDP 0x000000003FD89018 000024 (v02 VRTUAL)
Jul 2 01:49:15.972841 kernel: ACPI: XSDT 0x000000003FD89F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 01:49:15.972846 kernel: ACPI: FACP 0x000000003FD89C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 01:49:15.972854 kernel: ACPI: DSDT 0x000000003EBD2018 01DEC0 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Jul 2 01:49:15.972859 kernel: ACPI: DBG2 0x000000003FD89B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 01:49:15.972864 kernel: ACPI: GTDT 0x000000003FD89D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 01:49:15.972871 kernel: ACPI: OEM0 0x000000003FD89098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 01:49:15.972876 kernel: ACPI: SPCR 0x000000003FD89A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 01:49:15.972883 kernel: ACPI: APIC 0x000000003FD89818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 01:49:15.972889 kernel: ACPI: SRAT 0x000000003FD89198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 01:49:15.972895 kernel: ACPI: PPTT 0x000000003FD89418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jul 2 01:49:15.972901 kernel: ACPI: BGRT 0x000000003FD89E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 01:49:15.972907 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jul 2 01:49:15.972912 kernel: NUMA: Failed to initialise from firmware
Jul 2 01:49:15.972918 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff]
Jul 2 01:49:15.972924 kernel: NUMA: NODE_DATA [mem 0x1bf7f2900-0x1bf7f7fff]
Jul 2 01:49:15.972930 kernel: Zone ranges:
Jul 2 01:49:15.972935 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jul 2 01:49:15.972941 kernel: DMA32 empty
Jul 2 01:49:15.972948 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jul 2 01:49:15.972953 kernel: Movable zone start for each node
Jul 2 01:49:15.972959 kernel: Early memory node ranges
Jul 2 01:49:15.972964 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jul 2 01:49:15.972970 kernel: node 0: [mem 0x0000000000824000-0x000000003ec80fff]
Jul 2 01:49:15.972975 kernel: node 0: [mem 0x000000003ec81000-0x000000003eca9fff]
Jul 2 01:49:15.972981 kernel: node 0: [mem 0x000000003ecaa000-0x000000003fd29fff]
Jul 2 01:49:15.972987 kernel: node 0: [mem 0x000000003fd2a000-0x000000003fd7dfff]
Jul 2 01:49:15.972992 kernel: node 0: [mem 0x000000003fd7e000-0x000000003fd89fff]
Jul 2 01:49:15.972998 kernel: node 0: [mem 0x000000003fd8a000-0x000000003fd8dfff]
Jul 2 01:49:15.973003 kernel: node 0: [mem 0x000000003fd8e000-0x000000003fffffff]
Jul 2 01:49:15.973009 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jul 2 01:49:15.973016 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jul 2 01:49:15.973024 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jul 2 01:49:15.973030 kernel: psci: probing for conduit method from ACPI.
Jul 2 01:49:15.973036 kernel: psci: PSCIv1.1 detected in firmware.
Jul 2 01:49:15.973042 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 2 01:49:15.973049 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jul 2 01:49:15.973077 kernel: psci: SMC Calling Convention v1.4
Jul 2 01:49:15.973083 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1
Jul 2 01:49:15.973089 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1
Jul 2 01:49:15.973096 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Jul 2 01:49:15.973102 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Jul 2 01:49:15.973108 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 2 01:49:15.973114 kernel: Detected PIPT I-cache on CPU0
Jul 2 01:49:15.973120 kernel: CPU features: detected: GIC system register CPU interface
Jul 2 01:49:15.973126 kernel: CPU features: detected: Hardware dirty bit management
Jul 2 01:49:15.973132 kernel: CPU features: detected: Spectre-BHB
Jul 2 01:49:15.973138 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 2 01:49:15.973146 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 2 01:49:15.973152 kernel: CPU features: detected: ARM erratum 1418040
Jul 2 01:49:15.973159 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jul 2 01:49:15.973192 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jul 2 01:49:15.973198 kernel: Policy zone: Normal
Jul 2 01:49:15.973206 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=7b86ecfcd4701bdf4668db795601b20c118ac0b117c34a9b3836e0a5236b73b0
Jul 2 01:49:15.973213 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 01:49:15.973219 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 2 01:49:15.973225 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 01:49:15.973231 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 01:49:15.973239 kernel: software IO TLB: mapped [mem 0x000000003abd2000-0x000000003ebd2000] (64MB)
Jul 2 01:49:15.973245 kernel: Memory: 3990264K/4194160K available (9792K kernel code, 2092K rwdata, 7572K rodata, 36352K init, 777K bss, 203896K reserved, 0K cma-reserved)
Jul 2 01:49:15.973252 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 2 01:49:15.973258 kernel: trace event string verifier disabled
Jul 2 01:49:15.973264 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 2 01:49:15.973270 kernel: rcu: RCU event tracing is enabled.
Jul 2 01:49:15.973287 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 2 01:49:15.973293 kernel: Trampoline variant of Tasks RCU enabled.
Jul 2 01:49:15.973299 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 01:49:15.973305 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 01:49:15.973312 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 2 01:49:15.973319 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 2 01:49:15.973325 kernel: GICv3: 960 SPIs implemented
Jul 2 01:49:15.973331 kernel: GICv3: 0 Extended SPIs implemented
Jul 2 01:49:15.973337 kernel: GICv3: Distributor has no Range Selector support
Jul 2 01:49:15.973343 kernel: Root IRQ handler: gic_handle_irq
Jul 2 01:49:15.973349 kernel: GICv3: 16 PPIs implemented
Jul 2 01:49:15.973355 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jul 2 01:49:15.973361 kernel: ITS: No ITS available, not enabling LPIs
Jul 2 01:49:15.973367 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 01:49:15.973373 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 2 01:49:15.973379 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 2 01:49:15.973385 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 2 01:49:15.973393 kernel: Console: colour dummy device 80x25
Jul 2 01:49:15.973399 kernel: printk: console [tty1] enabled
Jul 2 01:49:15.973406 kernel: ACPI: Core revision 20210730
Jul 2 01:49:15.973412 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 2 01:49:15.973418 kernel: pid_max: default: 32768 minimum: 301
Jul 2 01:49:15.973424 kernel: LSM: Security Framework initializing
Jul 2 01:49:15.973431 kernel: SELinux: Initializing.
Jul 2 01:49:15.973437 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 01:49:15.973444 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 01:49:15.973451 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Jul 2 01:49:15.973457 kernel: Hyper-V: Host Build 10.0.22477.1369-1-0
Jul 2 01:49:15.973463 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 01:49:15.973469 kernel: Remapping and enabling EFI services.
Jul 2 01:49:15.973475 kernel: smp: Bringing up secondary CPUs ...
Jul 2 01:49:15.973481 kernel: Detected PIPT I-cache on CPU1
Jul 2 01:49:15.973488 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jul 2 01:49:15.973494 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 01:49:15.973507 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 2 01:49:15.973515 kernel: smp: Brought up 1 node, 2 CPUs
Jul 2 01:49:15.973522 kernel: SMP: Total of 2 processors activated.
Jul 2 01:49:15.973528 kernel: CPU features: detected: 32-bit EL0 Support
Jul 2 01:49:15.973534 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jul 2 01:49:15.973540 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 2 01:49:15.973547 kernel: CPU features: detected: CRC32 instructions
Jul 2 01:49:15.973553 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 2 01:49:15.973559 kernel: CPU features: detected: LSE atomic instructions
Jul 2 01:49:15.973565 kernel: CPU features: detected: Privileged Access Never
Jul 2 01:49:15.973573 kernel: CPU: All CPU(s) started at EL1
Jul 2 01:49:15.973579 kernel: alternatives: patching kernel code
Jul 2 01:49:15.973590 kernel: devtmpfs: initialized
Jul 2 01:49:15.973598 kernel: KASLR enabled
Jul 2 01:49:15.973605 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 01:49:15.973611 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 2 01:49:15.973618 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 01:49:15.973624 kernel: SMBIOS 3.1.0 present.
Jul 2 01:49:15.973631 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 11/28/2023
Jul 2 01:49:15.973638 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 01:49:15.973646 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 2 01:49:15.973665 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 2 01:49:15.973672 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 2 01:49:15.973678 kernel: audit: initializing netlink subsys (disabled)
Jul 2 01:49:15.973685 kernel: audit: type=2000 audit(0.090:1): state=initialized audit_enabled=0 res=1
Jul 2 01:49:15.973692 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 01:49:15.973698 kernel: cpuidle: using governor menu
Jul 2 01:49:15.973706 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 2 01:49:15.973713 kernel: ASID allocator initialised with 32768 entries
Jul 2 01:49:15.973719 kernel: ACPI: bus type PCI registered
Jul 2 01:49:15.973726 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 01:49:15.973733 kernel: Serial: AMBA PL011 UART driver
Jul 2 01:49:15.973739 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 01:49:15.973746 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Jul 2 01:49:15.973752 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 01:49:15.973759 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Jul 2 01:49:15.973767 kernel: cryptd: max_cpu_qlen set to 1000
Jul 2 01:49:15.973773 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 2 01:49:15.973779 kernel: ACPI: Added _OSI(Module Device)
Jul 2 01:49:15.973786 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 01:49:15.973792 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 01:49:15.973799 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 01:49:15.973805 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 2 01:49:15.973812 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 2 01:49:15.973819 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 2 01:49:15.973834 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 01:49:15.973840 kernel: ACPI: Interpreter enabled
Jul 2 01:49:15.973847 kernel: ACPI: Using GIC for interrupt routing
Jul 2 01:49:15.973853 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jul 2 01:49:15.973860 kernel: printk: console [ttyAMA0] enabled
Jul 2 01:49:15.973866 kernel: printk: bootconsole [pl11] disabled
Jul 2 01:49:15.973873 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jul 2 01:49:15.973879 kernel: iommu: Default domain type: Translated
Jul 2 01:49:15.973886 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 2 01:49:15.973894 kernel: vgaarb: loaded
Jul 2 01:49:15.973900 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 2 01:49:15.973907 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jul 2 01:49:15.973913 kernel: PTP clock support registered
Jul 2 01:49:15.973920 kernel: Registered efivars operations
Jul 2 01:49:15.973926 kernel: No ACPI PMU IRQ for CPU0
Jul 2 01:49:15.973932 kernel: No ACPI PMU IRQ for CPU1
Jul 2 01:49:15.973939 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 2 01:49:15.973945 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 01:49:15.973953 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 01:49:15.973959 kernel: pnp: PnP ACPI init
Jul 2 01:49:15.973966 kernel: pnp: PnP ACPI: found 0 devices
Jul 2 01:49:15.973972 kernel: NET: Registered PF_INET protocol family
Jul 2 01:49:15.973979 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 01:49:15.973986 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 2 01:49:15.973992 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 01:49:15.973999 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 01:49:15.974006 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Jul 2 01:49:15.974014 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 2 01:49:15.974020 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 01:49:15.974027 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 01:49:15.974033 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 01:49:15.974040 kernel: PCI: CLS 0 bytes, default 64
Jul 2 01:49:15.974046 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Jul 2 01:49:15.974082 kernel: kvm [1]: HYP mode not available
Jul 2 01:49:15.974089 kernel: Initialise system trusted keyrings
Jul 2 01:49:15.974096 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 2 01:49:15.974104 kernel: Key type asymmetric registered
Jul 2 01:49:15.974110 kernel: Asymmetric key parser 'x509' registered
Jul 2 01:49:15.974117 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 2 01:49:15.974123 kernel: io scheduler mq-deadline registered
Jul 2 01:49:15.974129 kernel: io scheduler kyber registered
Jul 2 01:49:15.974136 kernel: io scheduler bfq registered
Jul 2 01:49:15.974142 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 01:49:15.974149 kernel: thunder_xcv, ver 1.0
Jul 2 01:49:15.974155 kernel: thunder_bgx, ver 1.0
Jul 2 01:49:15.974163 kernel: nicpf, ver 1.0
Jul 2 01:49:15.974169 kernel: nicvf, ver 1.0
Jul 2 01:49:15.974284 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 2 01:49:15.974347 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-07-02T01:49:15 UTC (1719884955)
Jul 2 01:49:15.974356 kernel: efifb: probing for efifb
Jul 2 01:49:15.974363 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jul 2 01:49:15.974370 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jul 2 01:49:15.974376 kernel: efifb: scrolling: redraw
Jul 2 01:49:15.974384 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 2 01:49:15.974391 kernel: Console: switching to colour frame buffer device 128x48
Jul 2 01:49:15.974398 kernel: fb0: EFI VGA frame buffer device
Jul 2 01:49:15.974404 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jul 2 01:49:15.974411 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 2 01:49:15.974418 kernel: NET: Registered PF_INET6 protocol family
Jul 2 01:49:15.974424 kernel: Segment Routing with IPv6
Jul 2 01:49:15.974430 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 01:49:15.974437 kernel: NET: Registered PF_PACKET protocol family
Jul 2 01:49:15.974445 kernel: Key type dns_resolver registered
Jul 2 01:49:15.974451 kernel: registered taskstats version 1
Jul 2 01:49:15.974457 kernel: Loading compiled-in X.509 certificates
Jul 2 01:49:15.974464 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.161-flatcar: c418313b450e4055b23e41c11cb6dc415de0265d'
Jul 2 01:49:15.974471 kernel: Key type .fscrypt registered
Jul 2 01:49:15.974477 kernel: Key type fscrypt-provisioning registered
Jul 2 01:49:15.974484 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 01:49:15.974490 kernel: ima: Allocated hash algorithm: sha1
Jul 2 01:49:15.974497 kernel: ima: No architecture policies found
Jul 2 01:49:15.974504 kernel: clk: Disabling unused clocks
Jul 2 01:49:15.974511 kernel: Freeing unused kernel memory: 36352K
Jul 2 01:49:15.974517 kernel: Run /init as init process
Jul 2 01:49:15.974524 kernel: with arguments:
Jul 2 01:49:15.974530 kernel: /init
Jul 2 01:49:15.974536 kernel: with environment:
Jul 2 01:49:15.974543 kernel: HOME=/
Jul 2 01:49:15.974550 kernel: TERM=linux
Jul 2 01:49:15.974556 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 01:49:15.974566 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 2 01:49:15.974575 systemd[1]: Detected virtualization microsoft.
Jul 2 01:49:15.974582 systemd[1]: Detected architecture arm64.
Jul 2 01:49:15.974588 systemd[1]: Running in initrd.
Jul 2 01:49:15.974595 systemd[1]: No hostname configured, using default hostname.
Jul 2 01:49:15.974602 systemd[1]: Hostname set to <localhost>.
Jul 2 01:49:15.974609 systemd[1]: Initializing machine ID from random generator.
Jul 2 01:49:15.974618 systemd[1]: Queued start job for default target initrd.target.
Jul 2 01:49:15.974625 systemd[1]: Started systemd-ask-password-console.path.
Jul 2 01:49:15.974631 systemd[1]: Reached target cryptsetup.target.
Jul 2 01:49:15.974638 systemd[1]: Reached target paths.target.
Jul 2 01:49:15.974645 systemd[1]: Reached target slices.target.
Jul 2 01:49:15.974652 systemd[1]: Reached target swap.target.
Jul 2 01:49:15.974659 systemd[1]: Reached target timers.target.
Jul 2 01:49:15.974666 systemd[1]: Listening on iscsid.socket.
Jul 2 01:49:15.974674 systemd[1]: Listening on iscsiuio.socket.
Jul 2 01:49:15.974681 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 2 01:49:15.974688 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 2 01:49:15.974695 systemd[1]: Listening on systemd-journald.socket.
Jul 2 01:49:15.974702 systemd[1]: Listening on systemd-networkd.socket.
Jul 2 01:49:15.974710 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 2 01:49:15.974716 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 2 01:49:15.974723 systemd[1]: Reached target sockets.target.
Jul 2 01:49:15.974730 systemd[1]: Starting kmod-static-nodes.service...
Jul 2 01:49:15.974738 systemd[1]: Finished network-cleanup.service.
Jul 2 01:49:15.974745 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 01:49:15.974752 systemd[1]: Starting systemd-journald.service...
Jul 2 01:49:15.974759 systemd[1]: Starting systemd-modules-load.service...
Jul 2 01:49:15.974766 systemd[1]: Starting systemd-resolved.service...
Jul 2 01:49:15.974776 systemd-journald[276]: Journal started
Jul 2 01:49:15.974813 systemd-journald[276]: Runtime Journal (/run/log/journal/3eeb0fcacf01417b85eacd5a78c98a97) is 8.0M, max 78.6M, 70.6M free.
Jul 2 01:49:15.968827 systemd-modules-load[277]: Inserted module 'overlay'
Jul 2 01:49:15.997624 systemd[1]: Starting systemd-vconsole-setup.service...
Jul 2 01:49:16.006229 systemd-resolved[278]: Positive Trust Anchors:
Jul 2 01:49:16.017042 systemd[1]: Started systemd-journald.service.
Jul 2 01:49:16.017074 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 01:49:16.006237 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 01:49:16.030265 kernel: Bridge firewalling registered
Jul 2 01:49:16.006267 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 2 01:49:16.077588 kernel: audit: type=1130 audit(1719884956.059:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:16.077608 kernel: SCSI subsystem initialized
Jul 2 01:49:16.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:16.008285 systemd-resolved[278]: Defaulting to hostname 'linux'.
Jul 2 01:49:16.111745 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 01:49:16.111767 kernel: audit: type=1130 audit(1719884956.085:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:16.111783 kernel: device-mapper: uevent: version 1.0.3
Jul 2 01:49:16.111792 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Jul 2 01:49:16.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:16.030700 systemd-modules-load[277]: Inserted module 'br_netfilter'
Jul 2 01:49:16.060188 systemd[1]: Started systemd-resolved.service.
Jul 2 01:49:16.162141 kernel: audit: type=1130 audit(1719884956.125:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:16.162165 kernel: audit: type=1130 audit(1719884956.146:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:16.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:16.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:16.085983 systemd[1]: Finished kmod-static-nodes.service.
Jul 2 01:49:16.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:16.123139 systemd-modules-load[277]: Inserted module 'dm_multipath'
Jul 2 01:49:16.126242 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 01:49:16.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:16.147236 systemd[1]: Finished systemd-modules-load.service.
Jul 2 01:49:16.227843 kernel: audit: type=1130 audit(1719884956.166:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:16.227868 kernel: audit: type=1130 audit(1719884956.188:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:16.166464 systemd[1]: Finished systemd-vconsole-setup.service.
Jul 2 01:49:16.188507 systemd[1]: Reached target nss-lookup.target.
Jul 2 01:49:16.214217 systemd[1]: Starting dracut-cmdline-ask.service...
Jul 2 01:49:16.220204 systemd[1]: Starting systemd-sysctl.service...
Jul 2 01:49:16.270662 kernel: audit: type=1130 audit(1719884956.253:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:16.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:16.233718 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 2 01:49:16.294423 kernel: audit: type=1130 audit(1719884956.275:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:16.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:16.248571 systemd[1]: Finished systemd-sysctl.service.
Jul 2 01:49:16.269183 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 2 01:49:16.298333 systemd[1]: Finished dracut-cmdline-ask.service.
Jul 2 01:49:16.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:16.335546 systemd[1]: Starting dracut-cmdline.service...
Jul 2 01:49:16.344467 kernel: audit: type=1130 audit(1719884956.306:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:16.350663 dracut-cmdline[299]: dracut-dracut-053
Jul 2 01:49:16.355444 dracut-cmdline[299]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=7b86ecfcd4701bdf4668db795601b20c118ac0b117c34a9b3836e0a5236b73b0
Jul 2 01:49:16.441075 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 01:49:16.456079 kernel: iscsi: registered transport (tcp)
Jul 2 01:49:16.475677 kernel: iscsi: registered transport (qla4xxx)
Jul 2 01:49:16.475693 kernel: QLogic iSCSI HBA Driver
Jul 2 01:49:16.505821 systemd[1]: Finished dracut-cmdline.service.
Jul 2 01:49:16.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:16.511037 systemd[1]: Starting dracut-pre-udev.service...
Jul 2 01:49:16.562070 kernel: raid6: neonx8 gen() 13826 MB/s
Jul 2 01:49:16.582062 kernel: raid6: neonx8 xor() 10832 MB/s
Jul 2 01:49:16.602063 kernel: raid6: neonx4 gen() 13533 MB/s
Jul 2 01:49:16.623063 kernel: raid6: neonx4 xor() 11299 MB/s
Jul 2 01:49:16.643062 kernel: raid6: neonx2 gen() 12959 MB/s
Jul 2 01:49:16.663065 kernel: raid6: neonx2 xor() 10371 MB/s
Jul 2 01:49:16.684062 kernel: raid6: neonx1 gen() 10542 MB/s
Jul 2 01:49:16.704061 kernel: raid6: neonx1 xor() 8798 MB/s
Jul 2 01:49:16.724064 kernel: raid6: int64x8 gen() 6272 MB/s
Jul 2 01:49:16.745085 kernel: raid6: int64x8 xor() 3544 MB/s
Jul 2 01:49:16.766090 kernel: raid6: int64x4 gen() 7229 MB/s
Jul 2 01:49:16.786072 kernel: raid6: int64x4 xor() 3844 MB/s
Jul 2 01:49:16.807066 kernel: raid6: int64x2 gen() 6153 MB/s
Jul 2 01:49:16.828068 kernel: raid6: int64x2 xor() 3322 MB/s
Jul 2 01:49:16.848062 kernel: raid6: int64x1 gen() 5047 MB/s
Jul 2 01:49:16.873934 kernel: raid6: int64x1 xor() 2647 MB/s
Jul 2 01:49:16.873954 kernel: raid6: using algorithm neonx8 gen() 13826 MB/s
Jul 2 01:49:16.873971 kernel: raid6: .... xor() 10832 MB/s, rmw enabled
Jul 2 01:49:16.878308 kernel: raid6: using neon recovery algorithm
Jul 2 01:49:16.896069 kernel: xor: measuring software checksum speed
Jul 2 01:49:16.900064 kernel: 8regs : 17311 MB/sec
Jul 2 01:49:16.908449 kernel: 32regs : 20739 MB/sec
Jul 2 01:49:16.908459 kernel: arm64_neon : 27930 MB/sec
Jul 2 01:49:16.908470 kernel: xor: using function: arm64_neon (27930 MB/sec)
Jul 2 01:49:16.970077 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Jul 2 01:49:16.980080 systemd[1]: Finished dracut-pre-udev.service.
Jul 2 01:49:16.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:16.988000 audit: BPF prog-id=7 op=LOAD
Jul 2 01:49:16.988000 audit: BPF prog-id=8 op=LOAD
Jul 2 01:49:16.989505 systemd[1]: Starting systemd-udevd.service...
Jul 2 01:49:17.007506 systemd-udevd[475]: Using default interface naming scheme 'v252'.
Jul 2 01:49:17.014163 systemd[1]: Started systemd-udevd.service.
Jul 2 01:49:17.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:17.025220 systemd[1]: Starting dracut-pre-trigger.service...
Jul 2 01:49:17.040330 dracut-pre-trigger[496]: rd.md=0: removing MD RAID activation
Jul 2 01:49:17.070255 systemd[1]: Finished dracut-pre-trigger.service.
Jul 2 01:49:17.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:17.075320 systemd[1]: Starting systemd-udev-trigger.service...
Jul 2 01:49:17.109484 systemd[1]: Finished systemd-udev-trigger.service.
Jul 2 01:49:17.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:17.168176 kernel: hv_vmbus: Vmbus version:5.3 Jul 2 01:49:17.194661 kernel: hv_vmbus: registering driver hid_hyperv Jul 2 01:49:17.194714 kernel: hv_vmbus: registering driver hyperv_keyboard Jul 2 01:49:17.194724 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jul 2 01:49:17.194735 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jul 2 01:49:17.213863 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jul 2 01:49:17.214041 kernel: hv_vmbus: registering driver hv_netvsc Jul 2 01:49:17.222080 kernel: hv_vmbus: registering driver hv_storvsc Jul 2 01:49:17.226004 kernel: scsi host0: storvsc_host_t Jul 2 01:49:17.235765 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jul 2 01:49:17.235840 kernel: scsi host1: storvsc_host_t Jul 2 01:49:17.243099 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jul 2 01:49:17.264419 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jul 2 01:49:17.264602 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 2 01:49:17.266079 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jul 2 01:49:17.275408 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jul 2 01:49:17.275544 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jul 2 01:49:17.280071 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 2 01:49:17.280278 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jul 2 01:49:17.287068 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jul 2 01:49:17.295079 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 
01:49:17.302073 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 2 01:49:17.311078 kernel: hv_netvsc 0022487c-da80-0022-487c-da800022487c eth0: VF slot 1 added Jul 2 01:49:17.321080 kernel: hv_vmbus: registering driver hv_pci Jul 2 01:49:17.333889 kernel: hv_pci 693968d4-6eb8-419e-9eea-aa3088293344: PCI VMBus probing: Using version 0x10004 Jul 2 01:49:17.334094 kernel: hv_pci 693968d4-6eb8-419e-9eea-aa3088293344: PCI host bridge to bus 6eb8:00 Jul 2 01:49:17.345552 kernel: pci_bus 6eb8:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jul 2 01:49:17.351709 kernel: pci_bus 6eb8:00: No busn resource found for root bus, will use [bus 00-ff] Jul 2 01:49:17.358366 kernel: pci 6eb8:00:02.0: [15b3:1018] type 00 class 0x020000 Jul 2 01:49:17.372341 kernel: pci 6eb8:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jul 2 01:49:17.392084 kernel: pci 6eb8:00:02.0: enabling Extended Tags Jul 2 01:49:17.418504 kernel: pci 6eb8:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 6eb8:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jul 2 01:49:17.430798 kernel: pci_bus 6eb8:00: busn_res: [bus 00-ff] end is updated to 00 Jul 2 01:49:17.430931 kernel: pci 6eb8:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jul 2 01:49:17.470080 kernel: mlx5_core 6eb8:00:02.0: firmware version: 16.30.1284 Jul 2 01:49:17.624083 kernel: mlx5_core 6eb8:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0) Jul 2 01:49:17.659715 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. 
Jul 2 01:49:17.692755 kernel: hv_netvsc 0022487c-da80-0022-487c-da800022487c eth0: VF registering: eth1 Jul 2 01:49:17.692937 kernel: mlx5_core 6eb8:00:02.0 eth1: joined to eth0 Jul 2 01:49:17.704082 kernel: mlx5_core 6eb8:00:02.0 enP28344s1: renamed from eth1 Jul 2 01:49:17.723183 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (539) Jul 2 01:49:17.736137 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 01:49:17.867262 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 2 01:49:17.872756 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 2 01:49:17.879320 systemd[1]: Starting disk-uuid.service... Jul 2 01:49:17.894890 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 2 01:49:17.916072 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 01:49:17.925077 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 01:49:18.934075 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 01:49:18.934883 disk-uuid[603]: The operation has completed successfully. Jul 2 01:49:19.003397 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 01:49:19.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:19.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:19.003490 systemd[1]: Finished disk-uuid.service. Jul 2 01:49:19.008522 systemd[1]: Starting verity-setup.service... Jul 2 01:49:19.045071 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 2 01:49:19.301853 systemd[1]: Found device dev-mapper-usr.device. Jul 2 01:49:19.307660 systemd[1]: Mounting sysusr-usr.mount... 
Jul 2 01:49:19.316065 systemd[1]: Finished verity-setup.service. Jul 2 01:49:19.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:19.373069 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 2 01:49:19.373485 systemd[1]: Mounted sysusr-usr.mount. Jul 2 01:49:19.377130 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 2 01:49:19.377814 systemd[1]: Starting ignition-setup.service... Jul 2 01:49:19.395633 systemd[1]: Starting parse-ip-for-networkd.service... Jul 2 01:49:19.422200 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 2 01:49:19.422256 kernel: BTRFS info (device sda6): using free space tree Jul 2 01:49:19.428488 kernel: BTRFS info (device sda6): has skinny extents Jul 2 01:49:19.478755 systemd[1]: Finished parse-ip-for-networkd.service. Jul 2 01:49:19.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:19.487000 audit: BPF prog-id=9 op=LOAD Jul 2 01:49:19.487679 systemd[1]: Starting systemd-networkd.service... Jul 2 01:49:19.513140 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 2 01:49:19.513243 systemd-networkd[844]: lo: Link UP Jul 2 01:49:19.513246 systemd-networkd[844]: lo: Gained carrier Jul 2 01:49:19.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:19.513633 systemd-networkd[844]: Enumeration completed Jul 2 01:49:19.517803 systemd-networkd[844]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 2 01:49:19.520526 systemd[1]: Started systemd-networkd.service. Jul 2 01:49:19.525332 systemd[1]: Reached target network.target. Jul 2 01:49:19.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:19.533313 systemd[1]: Starting iscsiuio.service... Jul 2 01:49:19.564600 iscsid[853]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 2 01:49:19.564600 iscsid[853]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Jul 2 01:49:19.564600 iscsid[853]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jul 2 01:49:19.564600 iscsid[853]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 2 01:49:19.564600 iscsid[853]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 2 01:49:19.564600 iscsid[853]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 2 01:49:19.564600 iscsid[853]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 2 01:49:19.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:19.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:19.546449 systemd[1]: Started iscsiuio.service. Jul 2 01:49:19.554855 systemd[1]: Starting iscsid.service... 
Jul 2 01:49:19.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:19.564533 systemd[1]: Started iscsid.service. Jul 2 01:49:19.568933 systemd[1]: Starting dracut-initqueue.service... Jul 2 01:49:19.595289 systemd[1]: Finished dracut-initqueue.service. Jul 2 01:49:19.599450 systemd[1]: Reached target remote-fs-pre.target. Jul 2 01:49:19.617474 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 01:49:19.626584 systemd[1]: Reached target remote-fs.target. Jul 2 01:49:19.636996 systemd[1]: Starting dracut-pre-mount.service... Jul 2 01:49:19.654654 systemd[1]: Finished dracut-pre-mount.service. Jul 2 01:49:19.697389 systemd[1]: Finished ignition-setup.service. Jul 2 01:49:19.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:19.702384 systemd[1]: Starting ignition-fetch-offline.service...
Jul 2 01:49:19.729069 kernel: mlx5_core 6eb8:00:02.0 enP28344s1: Link up Jul 2 01:49:19.774156 kernel: hv_netvsc 0022487c-da80-0022-487c-da800022487c eth0: Data path switched to VF: enP28344s1 Jul 2 01:49:19.774309 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 01:49:19.779829 systemd-networkd[844]: enP28344s1: Link UP Jul 2 01:49:19.779907 systemd-networkd[844]: eth0: Link UP Jul 2 01:49:19.780029 systemd-networkd[844]: eth0: Gained carrier Jul 2 01:49:19.792303 systemd-networkd[844]: enP28344s1: Gained carrier Jul 2 01:49:19.803131 systemd-networkd[844]: eth0: DHCPv4 address 10.200.20.40/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 2 01:49:20.902234 systemd-networkd[844]: eth0: Gained IPv6LL Jul 2 01:49:22.307455 ignition[868]: Ignition 2.14.0 Jul 2 01:49:22.307467 ignition[868]: Stage: fetch-offline Jul 2 01:49:22.307521 ignition[868]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 01:49:22.307549 ignition[868]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Jul 2 01:49:22.341758 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 01:49:22.341898 ignition[868]: parsed url from cmdline: "" Jul 2 01:49:22.341902 ignition[868]: no config URL provided Jul 2 01:49:22.341907 ignition[868]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 01:49:22.390616 kernel: kauditd_printk_skb: 18 callbacks suppressed Jul 2 01:49:22.390653 kernel: audit: type=1130 audit(1719884962.362:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:22.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 01:49:22.354246 systemd[1]: Finished ignition-fetch-offline.service. Jul 2 01:49:22.341916 ignition[868]: no config at "/usr/lib/ignition/user.ign" Jul 2 01:49:22.363033 systemd[1]: Starting ignition-fetch.service... Jul 2 01:49:22.341921 ignition[868]: failed to fetch config: resource requires networking Jul 2 01:49:22.342161 ignition[868]: Ignition finished successfully Jul 2 01:49:22.394030 ignition[874]: Ignition 2.14.0 Jul 2 01:49:22.394035 ignition[874]: Stage: fetch Jul 2 01:49:22.394148 ignition[874]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 01:49:22.394165 ignition[874]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Jul 2 01:49:22.405913 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 01:49:22.406040 ignition[874]: parsed url from cmdline: "" Jul 2 01:49:22.406043 ignition[874]: no config URL provided Jul 2 01:49:22.406048 ignition[874]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 01:49:22.406067 ignition[874]: no config at "/usr/lib/ignition/user.ign" Jul 2 01:49:22.406098 ignition[874]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jul 2 01:49:22.505593 ignition[874]: GET result: OK Jul 2 01:49:22.505663 ignition[874]: config has been read from IMDS userdata Jul 2 01:49:22.508685 unknown[874]: fetched base config from "system" Jul 2 01:49:22.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 01:49:22.505704 ignition[874]: parsing config with SHA512: 09516b0af25753fabe5e2802ba394dae8d2b6ec1c27615e9781f384f4574a090303fd7418382da068657c58b2a9dead67cc0e0f5828908e92ba5d887a244fabe Jul 2 01:49:22.508692 unknown[874]: fetched base config from "system" Jul 2 01:49:22.544706 kernel: audit: type=1130 audit(1719884962.517:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:22.509217 ignition[874]: fetch: fetch complete Jul 2 01:49:22.508697 unknown[874]: fetched user config from "azure" Jul 2 01:49:22.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:22.509222 ignition[874]: fetch: fetch passed Jul 2 01:49:22.573514 kernel: audit: type=1130 audit(1719884962.555:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:22.513931 systemd[1]: Finished ignition-fetch.service. Jul 2 01:49:22.509262 ignition[874]: Ignition finished successfully Jul 2 01:49:22.535574 systemd[1]: Starting ignition-kargs.service... Jul 2 01:49:22.543319 ignition[880]: Ignition 2.14.0 Jul 2 01:49:22.552108 systemd[1]: Finished ignition-kargs.service. Jul 2 01:49:22.543326 ignition[880]: Stage: kargs Jul 2 01:49:22.585261 systemd[1]: Starting ignition-disks.service... Jul 2 01:49:22.543437 ignition[880]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 01:49:22.543456 ignition[880]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Jul 2 01:49:22.607820 systemd[1]: Finished ignition-disks.service. 
Jul 2 01:49:22.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:22.546111 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 01:49:22.643869 kernel: audit: type=1130 audit(1719884962.611:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:22.549326 ignition[880]: kargs: kargs passed Jul 2 01:49:22.629209 systemd[1]: Reached target initrd-root-device.target. Jul 2 01:49:22.549385 ignition[880]: Ignition finished successfully Jul 2 01:49:22.634625 systemd[1]: Reached target local-fs-pre.target. Jul 2 01:49:22.600302 ignition[886]: Ignition 2.14.0 Jul 2 01:49:22.642115 systemd[1]: Reached target local-fs.target. Jul 2 01:49:22.600308 ignition[886]: Stage: disks Jul 2 01:49:22.649220 systemd[1]: Reached target sysinit.target. Jul 2 01:49:22.600422 ignition[886]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 01:49:22.655851 systemd[1]: Reached target basic.target. Jul 2 01:49:22.600440 ignition[886]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Jul 2 01:49:22.667001 systemd[1]: Starting systemd-fsck-root.service... Jul 2 01:49:22.603361 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 01:49:22.606156 ignition[886]: disks: disks passed Jul 2 01:49:22.606213 ignition[886]: Ignition finished successfully Jul 2 01:49:22.727833 systemd-fsck[894]: ROOT: clean, 614/7326000 files, 481075/7359488 blocks Jul 2 01:49:22.731688 systemd[1]: Finished systemd-fsck-root.service. 
Jul 2 01:49:22.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:22.769697 systemd[1]: Mounting sysroot.mount... Jul 2 01:49:22.778507 kernel: audit: type=1130 audit(1719884962.744:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:22.793084 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 2 01:49:22.793478 systemd[1]: Mounted sysroot.mount. Jul 2 01:49:22.801093 systemd[1]: Reached target initrd-root-fs.target. Jul 2 01:49:22.844263 systemd[1]: Mounting sysroot-usr.mount... Jul 2 01:49:22.849079 systemd[1]: Starting flatcar-metadata-hostname.service... Jul 2 01:49:22.862065 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 01:49:22.862118 systemd[1]: Reached target ignition-diskful.target. Jul 2 01:49:22.878751 systemd[1]: Mounted sysroot-usr.mount. Jul 2 01:49:22.931392 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 2 01:49:22.936967 systemd[1]: Starting initrd-setup-root.service... Jul 2 01:49:22.962083 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (905) Jul 2 01:49:22.969943 initrd-setup-root[910]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 01:49:22.986377 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 2 01:49:22.986405 kernel: BTRFS info (device sda6): using free space tree Jul 2 01:49:22.986414 kernel: BTRFS info (device sda6): has skinny extents Jul 2 01:49:22.992129 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Jul 2 01:49:23.004812 initrd-setup-root[936]: cut: /sysroot/etc/group: No such file or directory Jul 2 01:49:23.029090 initrd-setup-root[944]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 01:49:23.039407 initrd-setup-root[952]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 01:49:23.490782 systemd[1]: Finished initrd-setup-root.service. Jul 2 01:49:23.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:23.496500 systemd[1]: Starting ignition-mount.service... Jul 2 01:49:23.528457 kernel: audit: type=1130 audit(1719884963.495:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:23.525979 systemd[1]: Starting sysroot-boot.service... Jul 2 01:49:23.533311 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Jul 2 01:49:23.533448 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Jul 2 01:49:23.564155 systemd[1]: Finished sysroot-boot.service. 
Jul 2 01:49:23.573742 ignition[973]: INFO : Ignition 2.14.0 Jul 2 01:49:23.573742 ignition[973]: INFO : Stage: mount Jul 2 01:49:23.573742 ignition[973]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 01:49:23.573742 ignition[973]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Jul 2 01:49:23.573742 ignition[973]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 01:49:23.573742 ignition[973]: INFO : mount: mount passed Jul 2 01:49:23.573742 ignition[973]: INFO : Ignition finished successfully Jul 2 01:49:23.659497 kernel: audit: type=1130 audit(1719884963.573:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:23.659523 kernel: audit: type=1130 audit(1719884963.599:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:23.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:23.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:23.574357 systemd[1]: Finished ignition-mount.service. 
Jul 2 01:49:24.038003 coreos-metadata[904]: Jul 02 01:49:24.037 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 2 01:49:24.047820 coreos-metadata[904]: Jul 02 01:49:24.047 INFO Fetch successful Jul 2 01:49:24.083912 coreos-metadata[904]: Jul 02 01:49:24.083 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jul 2 01:49:24.108068 coreos-metadata[904]: Jul 02 01:49:24.108 INFO Fetch successful Jul 2 01:49:24.126297 coreos-metadata[904]: Jul 02 01:49:24.126 INFO wrote hostname ci-3510.3.5-a-267983ca13 to /sysroot/etc/hostname Jul 2 01:49:24.136271 systemd[1]: Finished flatcar-metadata-hostname.service. Jul 2 01:49:24.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:24.168047 kernel: audit: type=1130 audit(1719884964.141:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:24.163555 systemd[1]: Starting ignition-files.service... Jul 2 01:49:24.174840 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 2 01:49:24.195073 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (983) Jul 2 01:49:24.207134 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 2 01:49:24.207160 kernel: BTRFS info (device sda6): using free space tree Jul 2 01:49:24.207170 kernel: BTRFS info (device sda6): has skinny extents Jul 2 01:49:24.218884 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Jul 2 01:49:24.235376 ignition[1002]: INFO : Ignition 2.14.0 Jul 2 01:49:24.241610 ignition[1002]: INFO : Stage: files Jul 2 01:49:24.241610 ignition[1002]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 01:49:24.241610 ignition[1002]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Jul 2 01:49:24.267277 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 01:49:24.274933 ignition[1002]: DEBUG : files: compiled without relabeling support, skipping Jul 2 01:49:24.274933 ignition[1002]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 01:49:24.274933 ignition[1002]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 01:49:24.338390 ignition[1002]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 01:49:24.346190 ignition[1002]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 01:49:24.346190 ignition[1002]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 01:49:24.346190 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 2 01:49:24.346190 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 2 01:49:24.338940 unknown[1002]: wrote ssh authorized keys file for user: core Jul 2 01:49:24.611551 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 2 01:49:24.831834 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 2 01:49:24.841937 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: 
op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 2 01:49:24.841937 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jul 2 01:49:25.226386 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 2 01:49:25.297853 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 2 01:49:25.308574 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 2 01:49:25.308574 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 01:49:25.308574 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 2 01:49:25.308574 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 2 01:49:25.308574 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 01:49:25.308574 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 01:49:25.308574 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 01:49:25.308574 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 01:49:25.308574 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 01:49:25.308574 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 01:49:25.308574 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jul 2 01:49:25.308574 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jul 2 01:49:25.308574 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Jul 2 01:49:25.308574 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition Jul 2 01:49:25.457654 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1004) Jul 2 01:49:25.363772 systemd[1]: mnt-oem3592796052.mount: Deactivated successfully. 
Jul 2 01:49:25.462315 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3592796052"
Jul 2 01:49:25.462315 ignition[1002]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3592796052": device or resource busy
Jul 2 01:49:25.462315 ignition[1002]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3592796052", trying btrfs: device or resource busy
Jul 2 01:49:25.462315 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3592796052"
Jul 2 01:49:25.462315 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3592796052"
Jul 2 01:49:25.462315 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem3592796052"
Jul 2 01:49:25.462315 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem3592796052"
Jul 2 01:49:25.462315 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Jul 2 01:49:25.462315 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Jul 2 01:49:25.462315 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition
Jul 2 01:49:25.462315 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2967585540"
Jul 2 01:49:25.462315 ignition[1002]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2967585540": device or resource busy
Jul 2 01:49:25.462315 ignition[1002]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2967585540", trying btrfs: device or resource busy
Jul 2 01:49:25.462315 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2967585540"
Jul 2 01:49:25.400225 systemd[1]: mnt-oem2967585540.mount: Deactivated successfully.
Jul 2 01:49:25.608604 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2967585540"
Jul 2 01:49:25.608604 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem2967585540"
Jul 2 01:49:25.608604 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem2967585540"
Jul 2 01:49:25.608604 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Jul 2 01:49:25.608604 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Jul 2 01:49:25.608604 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-arm64.raw: attempt #1
Jul 2 01:49:25.770398 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET result: OK
Jul 2 01:49:25.999837 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Jul 2 01:49:25.999837 ignition[1002]: INFO : files: op(14): [started] processing unit "waagent.service"
Jul 2 01:49:26.019457 ignition[1002]: INFO : files: op(14): [finished] processing unit "waagent.service"
Jul 2 01:49:26.019457 ignition[1002]: INFO : files: op(15): [started] processing unit "nvidia.service"
Jul 2 01:49:26.019457 ignition[1002]: INFO : files: op(15): [finished] processing unit "nvidia.service"
Jul 2 01:49:26.019457 ignition[1002]: INFO : files: op(16): [started] processing unit "prepare-helm.service"
Jul 2 01:49:26.019457 ignition[1002]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 01:49:26.019457 ignition[1002]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 01:49:26.019457 ignition[1002]: INFO : files: op(16): [finished] processing unit "prepare-helm.service"
Jul 2 01:49:26.019457 ignition[1002]: INFO : files: op(18): [started] setting preset to enabled for "waagent.service"
Jul 2 01:49:26.019457 ignition[1002]: INFO : files: op(18): [finished] setting preset to enabled for "waagent.service"
Jul 2 01:49:26.019457 ignition[1002]: INFO : files: op(19): [started] setting preset to enabled for "nvidia.service"
Jul 2 01:49:26.019457 ignition[1002]: INFO : files: op(19): [finished] setting preset to enabled for "nvidia.service"
Jul 2 01:49:26.019457 ignition[1002]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-helm.service"
Jul 2 01:49:26.019457 ignition[1002]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-helm.service"
Jul 2 01:49:26.019457 ignition[1002]: INFO : files: createResultFile: createFiles: op(1b): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 01:49:26.019457 ignition[1002]: INFO : files: createResultFile: createFiles: op(1b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 01:49:26.019457 ignition[1002]: INFO : files: files passed
Jul 2 01:49:26.019457 ignition[1002]: INFO : Ignition finished successfully
Jul 2 01:49:26.207930 kernel: audit: type=1130 audit(1719884966.023:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.014299 systemd[1]: Finished ignition-files.service.
Jul 2 01:49:26.024314 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Jul 2 01:49:26.048547 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Jul 2 01:49:26.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.236981 initrd-setup-root-after-ignition[1027]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 01:49:26.055657 systemd[1]: Starting ignition-quench.service...
Jul 2 01:49:26.068792 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 01:49:26.068895 systemd[1]: Finished ignition-quench.service.
Jul 2 01:49:26.079692 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Jul 2 01:49:26.094777 systemd[1]: Reached target ignition-complete.target.
Jul 2 01:49:26.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.110202 systemd[1]: Starting initrd-parse-etc.service...
Jul 2 01:49:26.139039 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 01:49:26.139151 systemd[1]: Finished initrd-parse-etc.service.
Jul 2 01:49:26.148184 systemd[1]: Reached target initrd-fs.target.
Jul 2 01:49:26.158701 systemd[1]: Reached target initrd.target.
Jul 2 01:49:26.169361 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Jul 2 01:49:26.170120 systemd[1]: Starting dracut-pre-pivot.service...
Jul 2 01:49:26.217672 systemd[1]: Finished dracut-pre-pivot.service.
Jul 2 01:49:26.222722 systemd[1]: Starting initrd-cleanup.service...
Jul 2 01:49:26.247483 systemd[1]: Stopped target nss-lookup.target.
Jul 2 01:49:26.253090 systemd[1]: Stopped target remote-cryptsetup.target.
Jul 2 01:49:26.262213 systemd[1]: Stopped target timers.target.
Jul 2 01:49:26.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.269510 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 01:49:26.269611 systemd[1]: Stopped dracut-pre-pivot.service.
Jul 2 01:49:26.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.278381 systemd[1]: Stopped target initrd.target.
Jul 2 01:49:26.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.285994 systemd[1]: Stopped target basic.target.
Jul 2 01:49:26.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.293466 systemd[1]: Stopped target ignition-complete.target.
Jul 2 01:49:26.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.302405 systemd[1]: Stopped target ignition-diskful.target.
Jul 2 01:49:26.310533 systemd[1]: Stopped target initrd-root-device.target.
Jul 2 01:49:26.318520 systemd[1]: Stopped target remote-fs.target.
Jul 2 01:49:26.454267 ignition[1040]: INFO : Ignition 2.14.0
Jul 2 01:49:26.454267 ignition[1040]: INFO : Stage: umount
Jul 2 01:49:26.454267 ignition[1040]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 01:49:26.454267 ignition[1040]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Jul 2 01:49:26.454267 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 2 01:49:26.454267 ignition[1040]: INFO : umount: umount passed
Jul 2 01:49:26.454267 ignition[1040]: INFO : Ignition finished successfully
Jul 2 01:49:26.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.517586 iscsid[853]: iscsid shutting down.
Jul 2 01:49:26.326472 systemd[1]: Stopped target remote-fs-pre.target.
Jul 2 01:49:26.337001 systemd[1]: Stopped target sysinit.target.
Jul 2 01:49:26.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.346192 systemd[1]: Stopped target local-fs.target.
Jul 2 01:49:26.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.353571 systemd[1]: Stopped target local-fs-pre.target.
Jul 2 01:49:26.361701 systemd[1]: Stopped target swap.target.
Jul 2 01:49:26.368960 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 01:49:26.369088 systemd[1]: Stopped dracut-pre-mount.service.
Jul 2 01:49:26.380953 systemd[1]: Stopped target cryptsetup.target.
Jul 2 01:49:26.389253 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 2 01:49:26.389348 systemd[1]: Stopped dracut-initqueue.service.
Jul 2 01:49:26.397850 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 2 01:49:26.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.397943 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Jul 2 01:49:26.406544 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 2 01:49:26.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.406629 systemd[1]: Stopped ignition-files.service.
Jul 2 01:49:26.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.413784 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 2 01:49:26.413875 systemd[1]: Stopped flatcar-metadata-hostname.service.
Jul 2 01:49:26.422827 systemd[1]: Stopping ignition-mount.service...
Jul 2 01:49:26.440473 systemd[1]: Stopping iscsid.service...
Jul 2 01:49:26.445039 systemd[1]: Stopping sysroot-boot.service...
Jul 2 01:49:26.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.448764 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 2 01:49:26.448963 systemd[1]: Stopped systemd-udev-trigger.service.
Jul 2 01:49:26.457335 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 2 01:49:26.457474 systemd[1]: Stopped dracut-pre-trigger.service.
Jul 2 01:49:26.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.466631 systemd[1]: iscsid.service: Deactivated successfully.
Jul 2 01:49:26.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.466738 systemd[1]: Stopped iscsid.service.
Jul 2 01:49:26.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.478999 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 2 01:49:26.479122 systemd[1]: Stopped ignition-mount.service.
Jul 2 01:49:26.496935 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 2 01:49:26.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.497005 systemd[1]: Stopped ignition-disks.service.
Jul 2 01:49:26.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.505471 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 2 01:49:26.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.505521 systemd[1]: Stopped ignition-kargs.service.
Jul 2 01:49:26.775000 audit: BPF prog-id=6 op=UNLOAD
Jul 2 01:49:26.513137 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 2 01:49:26.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.513177 systemd[1]: Stopped ignition-fetch.service.
Jul 2 01:49:26.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.528614 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 2 01:49:26.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.528664 systemd[1]: Stopped ignition-fetch-offline.service.
Jul 2 01:49:26.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.537440 systemd[1]: Stopped target paths.target.
Jul 2 01:49:26.844367 kernel: hv_netvsc 0022487c-da80-0022-487c-da800022487c eth0: Data path switched from VF: enP28344s1
Jul 2 01:49:26.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.545313 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 2 01:49:26.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.553278 systemd[1]: Stopped systemd-ask-password-console.path.
Jul 2 01:49:26.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.558425 systemd[1]: Stopped target slices.target.
Jul 2 01:49:26.571464 systemd[1]: Stopped target sockets.target.
Jul 2 01:49:26.579233 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 2 01:49:26.579281 systemd[1]: Closed iscsid.socket.
Jul 2 01:49:26.588364 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 2 01:49:26.588418 systemd[1]: Stopped ignition-setup.service.
Jul 2 01:49:26.596659 systemd[1]: Stopping iscsiuio.service...
Jul 2 01:49:26.607352 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 2 01:49:26.607807 systemd[1]: iscsiuio.service: Deactivated successfully.
Jul 2 01:49:26.607904 systemd[1]: Stopped iscsiuio.service.
Jul 2 01:49:26.614947 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 2 01:49:26.615025 systemd[1]: Finished initrd-cleanup.service.
Jul 2 01:49:26.625180 systemd[1]: Stopped target network.target.
Jul 2 01:49:26.634076 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 2 01:49:26.634110 systemd[1]: Closed iscsiuio.socket.
Jul 2 01:49:26.644536 systemd[1]: Stopping systemd-networkd.service...
Jul 2 01:49:26.653752 systemd[1]: Stopping systemd-resolved.service...
Jul 2 01:49:26.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:26.658638 systemd-networkd[844]: eth0: DHCPv6 lease lost
Jul 2 01:49:26.937000 audit: BPF prog-id=9 op=UNLOAD
Jul 2 01:49:26.663998 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 2 01:49:26.664112 systemd[1]: Stopped systemd-networkd.service.
Jul 2 01:49:26.673955 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 2 01:49:26.673994 systemd[1]: Closed systemd-networkd.socket.
Jul 2 01:49:26.689669 systemd[1]: Stopping network-cleanup.service...
Jul 2 01:49:26.699162 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 2 01:49:26.699241 systemd[1]: Stopped parse-ip-for-networkd.service.
Jul 2 01:49:26.710156 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 01:49:26.710207 systemd[1]: Stopped systemd-sysctl.service.
Jul 2 01:49:26.722945 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 2 01:49:26.722990 systemd[1]: Stopped systemd-modules-load.service.
Jul 2 01:49:26.727869 systemd[1]: Stopping systemd-udevd.service...
Jul 2 01:49:26.738310 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 2 01:49:26.738811 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 2 01:49:26.738924 systemd[1]: Stopped systemd-resolved.service.
Jul 2 01:49:26.750933 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 2 01:49:26.751050 systemd[1]: Stopped systemd-udevd.service.
Jul 2 01:49:26.759151 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 2 01:49:26.759232 systemd[1]: Stopped sysroot-boot.service.
Jul 2 01:49:26.767363 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 2 01:49:26.767416 systemd[1]: Closed systemd-udevd-control.socket.
Jul 2 01:49:26.775786 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 2 01:49:26.775818 systemd[1]: Closed systemd-udevd-kernel.socket.
Jul 2 01:49:26.780461 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 2 01:49:26.780515 systemd[1]: Stopped dracut-pre-udev.service.
Jul 2 01:49:26.788207 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 2 01:49:26.788250 systemd[1]: Stopped dracut-cmdline.service.
Jul 2 01:49:26.797085 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 01:49:26.797127 systemd[1]: Stopped dracut-cmdline-ask.service.
Jul 2 01:49:27.003089 systemd-journald[276]: Received SIGTERM from PID 1 (n/a).
Jul 2 01:49:26.804947 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 2 01:49:26.804983 systemd[1]: Stopped initrd-setup-root.service.
Jul 2 01:49:26.815277 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Jul 2 01:49:26.822940 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 2 01:49:26.822995 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Jul 2 01:49:26.836210 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 2 01:49:26.836253 systemd[1]: Stopped kmod-static-nodes.service.
Jul 2 01:49:26.840345 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 01:49:26.840385 systemd[1]: Stopped systemd-vconsole-setup.service.
Jul 2 01:49:26.849877 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 2 01:49:26.850349 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 2 01:49:26.850447 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Jul 2 01:49:26.924576 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 2 01:49:26.924685 systemd[1]: Stopped network-cleanup.service.
Jul 2 01:49:26.933310 systemd[1]: Reached target initrd-switch-root.target.
Jul 2 01:49:26.942974 systemd[1]: Starting initrd-switch-root.service...
Jul 2 01:49:26.960681 systemd[1]: Switching root.
Jul 2 01:49:27.003523 systemd-journald[276]: Journal stopped
Jul 2 01:49:46.141845 kernel: SELinux: Class mctp_socket not defined in policy.
Jul 2 01:49:46.141865 kernel: SELinux: Class anon_inode not defined in policy.
Jul 2 01:49:46.141876 kernel: SELinux: the above unknown classes and permissions will be allowed
Jul 2 01:49:46.141885 kernel: SELinux: policy capability network_peer_controls=1
Jul 2 01:49:46.141893 kernel: SELinux: policy capability open_perms=1
Jul 2 01:49:46.141901 kernel: SELinux: policy capability extended_socket_class=1
Jul 2 01:49:46.141910 kernel: SELinux: policy capability always_check_network=0
Jul 2 01:49:46.141918 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 2 01:49:46.141926 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 2 01:49:46.141935 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 2 01:49:46.141944 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 2 01:49:46.141952 kernel: kauditd_printk_skb: 43 callbacks suppressed
Jul 2 01:49:46.141961 kernel: audit: type=1403 audit(1719884969.135:82): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 2 01:49:46.141970 systemd[1]: Successfully loaded SELinux policy in 333.828ms.
Jul 2 01:49:46.141981 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 38.867ms.
Jul 2 01:49:46.141992 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 2 01:49:46.142002 systemd[1]: Detected virtualization microsoft.
Jul 2 01:49:46.142011 systemd[1]: Detected architecture arm64.
Jul 2 01:49:46.142019 systemd[1]: Detected first boot.
Jul 2 01:49:46.142029 systemd[1]: Hostname set to <ci-3510.3.5-a-267983ca13>.
Jul 2 01:49:46.142038 systemd[1]: Initializing machine ID from random generator.
Jul 2 01:49:46.142048 kernel: audit: type=1400 audit(1719884969.831:83): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Jul 2 01:49:46.142072 kernel: audit: type=1400 audit(1719884969.831:84): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Jul 2 01:49:46.142083 kernel: audit: type=1334 audit(1719884969.847:85): prog-id=10 op=LOAD
Jul 2 01:49:46.142092 kernel: audit: type=1334 audit(1719884969.847:86): prog-id=10 op=UNLOAD
Jul 2 01:49:46.142100 kernel: audit: type=1334 audit(1719884969.862:87): prog-id=11 op=LOAD
Jul 2 01:49:46.142109 kernel: audit: type=1334 audit(1719884969.862:88): prog-id=11 op=UNLOAD
Jul 2 01:49:46.142118 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Jul 2 01:49:46.142127 kernel: audit: type=1400 audit(1719884971.114:89): avc: denied { associate } for pid=1074 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Jul 2 01:49:46.142139 kernel: audit: type=1300 audit(1719884971.114:89): arch=c00000b7 syscall=5 success=yes exit=0 a0=40001458b4 a1=40000c6df8 a2=40000cd0c0 a3=32 items=0 ppid=1057 pid=1074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 01:49:46.142148 kernel: audit: type=1327 audit(1719884971.114:89): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Jul 2 01:49:46.142157 systemd[1]: Populated /etc with preset unit settings.
Jul 2 01:49:46.142166 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 2 01:49:46.142175 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 2 01:49:46.142186 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 01:49:46.142196 kernel: kauditd_printk_skb: 6 callbacks suppressed
Jul 2 01:49:46.142204 kernel: audit: type=1334 audit(1719884984.773:91): prog-id=12 op=LOAD
Jul 2 01:49:46.142213 kernel: audit: type=1334 audit(1719884984.773:92): prog-id=3 op=UNLOAD
Jul 2 01:49:46.142221 kernel: audit: type=1334 audit(1719884984.773:93): prog-id=13 op=LOAD
Jul 2 01:49:46.142230 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 2 01:49:46.142241 kernel: audit: type=1334 audit(1719884984.773:94): prog-id=14 op=LOAD
Jul 2 01:49:46.142250 systemd[1]: Stopped initrd-switch-root.service.
Jul 2 01:49:46.142260 kernel: audit: type=1334 audit(1719884984.773:95): prog-id=4 op=UNLOAD
Jul 2 01:49:46.142269 kernel: audit: type=1334 audit(1719884984.773:96): prog-id=5 op=UNLOAD
Jul 2 01:49:46.142278 kernel: audit: type=1334 audit(1719884984.773:97): prog-id=15 op=LOAD
Jul 2 01:49:46.142287 kernel: audit: type=1334 audit(1719884984.773:98): prog-id=12 op=UNLOAD
Jul 2 01:49:46.142295 kernel: audit: type=1334 audit(1719884984.773:99): prog-id=16 op=LOAD
Jul 2 01:49:46.142304 kernel: audit: type=1334 audit(1719884984.774:100): prog-id=17 op=LOAD
Jul 2 01:49:46.142313 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 2 01:49:46.142322 systemd[1]: Created slice system-addon\x2dconfig.slice.
Jul 2 01:49:46.142332 systemd[1]: Created slice system-addon\x2drun.slice.
Jul 2 01:49:46.142342 systemd[1]: Created slice system-getty.slice.
Jul 2 01:49:46.142351 systemd[1]: Created slice system-modprobe.slice.
Jul 2 01:49:46.142361 systemd[1]: Created slice system-serial\x2dgetty.slice.
Jul 2 01:49:46.142370 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Jul 2 01:49:46.142379 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Jul 2 01:49:46.142388 systemd[1]: Created slice user.slice.
Jul 2 01:49:46.142398 systemd[1]: Started systemd-ask-password-console.path.
Jul 2 01:49:46.142407 systemd[1]: Started systemd-ask-password-wall.path.
Jul 2 01:49:46.142416 systemd[1]: Set up automount boot.automount.
Jul 2 01:49:46.142427 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Jul 2 01:49:46.142439 systemd[1]: Stopped target initrd-switch-root.target.
Jul 2 01:49:46.142448 systemd[1]: Stopped target initrd-fs.target.
Jul 2 01:49:46.142457 systemd[1]: Stopped target initrd-root-fs.target.
Jul 2 01:49:46.142466 systemd[1]: Reached target integritysetup.target.
Jul 2 01:49:46.142476 systemd[1]: Reached target remote-cryptsetup.target.
Jul 2 01:49:46.142485 systemd[1]: Reached target remote-fs.target.
Jul 2 01:49:46.142496 systemd[1]: Reached target slices.target.
Jul 2 01:49:46.142505 systemd[1]: Reached target swap.target.
Jul 2 01:49:46.142514 systemd[1]: Reached target torcx.target.
Jul 2 01:49:46.142523 systemd[1]: Reached target veritysetup.target.
Jul 2 01:49:46.142533 systemd[1]: Listening on systemd-coredump.socket.
Jul 2 01:49:46.142542 systemd[1]: Listening on systemd-initctl.socket.
Jul 2 01:49:46.142551 systemd[1]: Listening on systemd-networkd.socket.
Jul 2 01:49:46.142562 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 2 01:49:46.142571 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 2 01:49:46.142581 systemd[1]: Listening on systemd-userdbd.socket.
Jul 2 01:49:46.142590 systemd[1]: Mounting dev-hugepages.mount...
Jul 2 01:49:46.142600 systemd[1]: Mounting dev-mqueue.mount...
Jul 2 01:49:46.142609 systemd[1]: Mounting media.mount...
Jul 2 01:49:46.142619 systemd[1]: Mounting sys-kernel-debug.mount...
Jul 2 01:49:46.142630 systemd[1]: Mounting sys-kernel-tracing.mount...
Jul 2 01:49:46.142639 systemd[1]: Mounting tmp.mount...
Jul 2 01:49:46.142648 systemd[1]: Starting flatcar-tmpfiles.service...
Jul 2 01:49:46.142658 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 2 01:49:46.142668 systemd[1]: Starting kmod-static-nodes.service...
Jul 2 01:49:46.142677 systemd[1]: Starting modprobe@configfs.service...
Jul 2 01:49:46.142687 systemd[1]: Starting modprobe@dm_mod.service...
Jul 2 01:49:46.142696 systemd[1]: Starting modprobe@drm.service...
Jul 2 01:49:46.142706 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 2 01:49:46.142717 systemd[1]: Starting modprobe@fuse.service...
Jul 2 01:49:46.142727 systemd[1]: Starting modprobe@loop.service...
Jul 2 01:49:46.142737 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 2 01:49:46.142747 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 2 01:49:46.142757 systemd[1]: Stopped systemd-fsck-root.service.
Jul 2 01:49:46.142767 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 2 01:49:46.142776 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 2 01:49:46.142785 systemd[1]: Stopped systemd-journald.service.
Jul 2 01:49:46.142794 systemd[1]: systemd-journald.service: Consumed 2.793s CPU time.
Jul 2 01:49:46.142805 systemd[1]: Starting systemd-journald.service...
Jul 2 01:49:46.142814 systemd[1]: Starting systemd-modules-load.service...
Jul 2 01:49:46.142824 systemd[1]: Starting systemd-network-generator.service...
Jul 2 01:49:46.142834 systemd[1]: Starting systemd-remount-fs.service...
Jul 2 01:49:46.142843 systemd[1]: Starting systemd-udev-trigger.service...
Jul 2 01:49:46.142852 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 2 01:49:46.142861 systemd[1]: Stopped verity-setup.service.
Jul 2 01:49:46.142871 systemd[1]: Mounted dev-hugepages.mount.
Jul 2 01:49:46.142880 systemd[1]: Mounted dev-mqueue.mount.
Jul 2 01:49:46.142890 systemd[1]: Mounted media.mount.
Jul 2 01:49:46.142900 systemd[1]: Mounted sys-kernel-debug.mount.
Jul 2 01:49:46.142909 systemd[1]: Mounted sys-kernel-tracing.mount.
Jul 2 01:49:46.142919 systemd[1]: Mounted tmp.mount.
Jul 2 01:49:46.142928 systemd[1]: Finished flatcar-tmpfiles.service.
Jul 2 01:49:46.142937 systemd[1]: Finished kmod-static-nodes.service.
Jul 2 01:49:46.142947 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 01:49:46.142956 systemd[1]: Finished modprobe@dm_mod.service.
Jul 2 01:49:46.142965 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 01:49:46.142976 systemd[1]: Finished modprobe@drm.service.
Jul 2 01:49:46.142988 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 01:49:46.142999 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 2 01:49:46.143012 systemd-journald[1152]: Journal started
Jul 2 01:49:46.143049 systemd-journald[1152]: Runtime Journal (/run/log/journal/e827a1de70e043ca8499264bf85dcfb0) is 8.0M, max 78.6M, 70.6M free.
Jul 2 01:49:29.135000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 2 01:49:29.831000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Jul 2 01:49:29.831000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Jul 2 01:49:29.847000 audit: BPF prog-id=10 op=LOAD
Jul 2 01:49:29.847000 audit: BPF prog-id=10 op=UNLOAD
Jul 2 01:49:29.862000 audit: BPF prog-id=11 op=LOAD
Jul 2 01:49:29.862000 audit: BPF prog-id=11 op=UNLOAD
Jul 2 01:49:31.114000 audit[1074]: AVC avc: denied { associate } for pid=1074 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Jul 2 01:49:31.114000 audit[1074]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001458b4 a1=40000c6df8 a2=40000cd0c0 a3=32 items=0 ppid=1057 pid=1074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 01:49:31.114000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Jul 2 01:49:31.122000 audit[1074]: AVC avc: denied { associate } for pid=1074 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Jul 2 01:49:31.122000 audit[1074]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000145999 a2=1ed a3=0 items=2 ppid=1057 pid=1074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 01:49:31.122000 audit: CWD cwd="/"
Jul 2 01:49:31.122000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 01:49:31.122000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 01:49:31.122000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Jul 2 01:49:44.773000 audit: BPF prog-id=12 op=LOAD
Jul 2 01:49:44.773000 audit: BPF prog-id=3 op=UNLOAD
Jul 2 01:49:44.773000 audit: BPF prog-id=13 op=LOAD
Jul 2 01:49:44.773000 audit: BPF prog-id=14 op=LOAD
Jul 2 01:49:44.773000 audit: BPF prog-id=4 op=UNLOAD
Jul 2 01:49:44.773000 audit: BPF prog-id=5 op=UNLOAD
Jul 2 01:49:44.773000 audit: BPF prog-id=15 op=LOAD
Jul 2 01:49:44.773000 audit: BPF prog-id=12 op=UNLOAD
Jul 2 01:49:44.773000 audit: BPF prog-id=16 op=LOAD
Jul 2 01:49:44.774000 audit: BPF prog-id=17 op=LOAD
Jul 2 01:49:44.774000 audit: BPF prog-id=13 op=UNLOAD
Jul 2 01:49:44.774000 audit: BPF prog-id=14 op=UNLOAD
Jul 2 01:49:44.779000 audit: BPF prog-id=18 op=LOAD Jul 
2 01:49:44.779000 audit: BPF prog-id=15 op=UNLOAD Jul 2 01:49:44.784000 audit: BPF prog-id=19 op=LOAD Jul 2 01:49:44.790000 audit: BPF prog-id=20 op=LOAD Jul 2 01:49:44.790000 audit: BPF prog-id=16 op=UNLOAD Jul 2 01:49:44.790000 audit: BPF prog-id=17 op=UNLOAD Jul 2 01:49:44.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:44.822000 audit: BPF prog-id=18 op=UNLOAD Jul 2 01:49:44.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:44.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:45.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:45.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:45.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:45.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 01:49:45.303000 audit: BPF prog-id=21 op=LOAD Jul 2 01:49:45.303000 audit: BPF prog-id=22 op=LOAD Jul 2 01:49:45.303000 audit: BPF prog-id=23 op=LOAD Jul 2 01:49:45.303000 audit: BPF prog-id=19 op=UNLOAD Jul 2 01:49:45.303000 audit: BPF prog-id=20 op=UNLOAD Jul 2 01:49:45.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:45.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:45.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:45.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:45.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:45.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:45.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 01:49:46.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:46.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:46.139000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 01:49:46.139000 audit[1152]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=ffffd6bf5630 a2=4000 a3=1 items=0 ppid=1 pid=1152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 01:49:46.139000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 2 01:49:44.772150 systemd[1]: Queued start job for default target multi-user.target. Jul 2 01:49:31.041792 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2024-07-02T01:49:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 01:49:44.790836 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 2 01:49:31.057167 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2024-07-02T01:49:31Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 01:49:44.791188 systemd[1]: systemd-journald.service: Consumed 2.793s CPU time. 
Jul 2 01:49:31.057185 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2024-07-02T01:49:31Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 01:49:31.057221 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2024-07-02T01:49:31Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 2 01:49:31.057231 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2024-07-02T01:49:31Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 2 01:49:31.057260 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2024-07-02T01:49:31Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 2 01:49:31.057272 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2024-07-02T01:49:31Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 2 01:49:31.057467 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2024-07-02T01:49:31Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 2 01:49:31.057498 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2024-07-02T01:49:31Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 01:49:31.057509 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2024-07-02T01:49:31Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 01:49:31.100186 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2024-07-02T01:49:31Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 2 01:49:31.100247 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2024-07-02T01:49:31Z" level=debug msg="new archive/reference added to cache" format=tgz 
name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 2 01:49:31.100272 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2024-07-02T01:49:31Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.5: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.5 Jul 2 01:49:31.100287 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2024-07-02T01:49:31Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 2 01:49:31.100309 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2024-07-02T01:49:31Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.5: no such file or directory" path=/var/lib/torcx/store/3510.3.5 Jul 2 01:49:31.100322 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2024-07-02T01:49:31Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 2 01:49:41.146338 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2024-07-02T01:49:41Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 01:49:41.146628 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2024-07-02T01:49:41Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 01:49:41.146737 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2024-07-02T01:49:41Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 
01:49:41.146925 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2024-07-02T01:49:41Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 01:49:41.146972 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2024-07-02T01:49:41Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 2 01:49:41.147035 /usr/lib/systemd/system-generators/torcx-generator[1074]: time="2024-07-02T01:49:41Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 2 01:49:46.151627 kernel: loop: module loaded Jul 2 01:49:46.151659 systemd[1]: Started systemd-journald.service. Jul 2 01:49:46.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:46.161092 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 01:49:46.161295 systemd[1]: Finished modprobe@loop.service. Jul 2 01:49:46.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:46.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 01:49:46.165711 systemd[1]: Finished systemd-remount-fs.service. Jul 2 01:49:46.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:46.170728 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 01:49:46.172610 systemd[1]: Starting systemd-hwdb-update.service... Jul 2 01:49:46.177953 systemd[1]: Starting systemd-journal-flush.service... Jul 2 01:49:46.182107 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 01:49:46.183015 systemd[1]: Starting systemd-random-seed.service... Jul 2 01:49:46.187129 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 01:49:46.188151 systemd[1]: Starting systemd-sysusers.service... Jul 2 01:49:46.348250 systemd[1]: Finished systemd-network-generator.service. Jul 2 01:49:46.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:46.353262 systemd[1]: Reached target network-pre.target. Jul 2 01:49:46.391967 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 01:49:46.392133 systemd[1]: Finished modprobe@configfs.service. Jul 2 01:49:46.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 01:49:46.396000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:46.397776 systemd[1]: Mounting sys-kernel-config.mount... Jul 2 01:49:46.402413 systemd[1]: Mounted sys-kernel-config.mount. Jul 2 01:49:46.786089 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 01:49:46.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:46.791768 systemd[1]: Starting systemd-udev-settle.service... Jul 2 01:49:46.802257 udevadm[1191]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 2 01:49:46.812759 systemd-journald[1152]: Time spent on flushing to /var/log/journal/e827a1de70e043ca8499264bf85dcfb0 is 13.147ms for 1095 entries. Jul 2 01:49:46.812759 systemd-journald[1152]: System Journal (/var/log/journal/e827a1de70e043ca8499264bf85dcfb0) is 8.0M, max 2.6G, 2.6G free. Jul 2 01:49:48.586820 kernel: fuse: init (API version 7.34) Jul 2 01:49:48.586892 systemd-journald[1152]: Received client request to flush runtime journal. Jul 2 01:49:46.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:46.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 01:49:46.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:47.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:47.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:46.820519 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 01:49:46.820649 systemd[1]: Finished modprobe@fuse.service. Jul 2 01:49:46.825828 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 2 01:49:46.830781 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 2 01:49:46.896113 systemd[1]: Finished systemd-modules-load.service. Jul 2 01:49:46.902328 systemd[1]: Starting systemd-sysctl.service... Jul 2 01:49:47.275839 systemd[1]: Finished systemd-random-seed.service. Jul 2 01:49:47.280679 systemd[1]: Reached target first-boot-complete.target. Jul 2 01:49:47.593322 systemd[1]: Finished systemd-sysctl.service. Jul 2 01:49:48.587813 systemd[1]: Finished systemd-journal-flush.service. Jul 2 01:49:48.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:49.810251 systemd[1]: Finished systemd-sysusers.service. Jul 2 01:49:49.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 2 01:49:49.817272 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 2 01:49:49.819816 kernel: kauditd_printk_skb: 47 callbacks suppressed Jul 2 01:49:49.819866 kernel: audit: type=1130 audit(1719884989.814:146): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:51.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:51.502456 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 2 01:49:51.526079 kernel: audit: type=1130 audit(1719884991.507:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:53.800218 systemd[1]: Finished systemd-hwdb-update.service. Jul 2 01:49:53.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:53.822000 audit: BPF prog-id=24 op=LOAD Jul 2 01:49:53.831567 kernel: audit: type=1130 audit(1719884993.805:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:53.831613 kernel: audit: type=1334 audit(1719884993.822:149): prog-id=24 op=LOAD Jul 2 01:49:53.831613 systemd[1]: Starting systemd-udevd.service... 
Jul 2 01:49:53.822000 audit: BPF prog-id=25 op=LOAD Jul 2 01:49:53.840063 kernel: audit: type=1334 audit(1719884993.822:150): prog-id=25 op=LOAD Jul 2 01:49:53.822000 audit: BPF prog-id=7 op=UNLOAD Jul 2 01:49:53.845418 kernel: audit: type=1334 audit(1719884993.822:151): prog-id=7 op=UNLOAD Jul 2 01:49:53.822000 audit: BPF prog-id=8 op=UNLOAD Jul 2 01:49:53.850972 kernel: audit: type=1334 audit(1719884993.822:152): prog-id=8 op=UNLOAD Jul 2 01:49:53.863522 systemd-udevd[1199]: Using default interface naming scheme 'v252'. Jul 2 01:49:54.292742 systemd[1]: Started systemd-udevd.service. Jul 2 01:49:54.298761 systemd[1]: Starting systemd-networkd.service... Jul 2 01:49:54.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:54.298000 audit: BPF prog-id=26 op=LOAD Jul 2 01:49:54.332276 kernel: audit: type=1130 audit(1719884994.297:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:54.332345 kernel: audit: type=1334 audit(1719884994.298:154): prog-id=26 op=LOAD Jul 2 01:49:54.349506 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Jul 2 01:49:54.355385 systemd[1]: Starting systemd-userdbd.service... Jul 2 01:49:54.354000 audit: BPF prog-id=27 op=LOAD Jul 2 01:49:54.354000 audit: BPF prog-id=28 op=LOAD Jul 2 01:49:54.354000 audit: BPF prog-id=29 op=LOAD Jul 2 01:49:54.365097 kernel: audit: type=1334 audit(1719884994.354:155): prog-id=27 op=LOAD Jul 2 01:49:54.411966 systemd[1]: Started systemd-userdbd.service. Jul 2 01:49:54.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jul 2 01:49:54.445000 audit[1218]: AVC avc: denied { confidentiality } for pid=1218 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 2 01:49:54.470877 kernel: hv_vmbus: registering driver hyperv_fb Jul 2 01:49:54.470976 kernel: mousedev: PS/2 mouse device common for all mice Jul 2 01:49:54.471019 kernel: hv_vmbus: registering driver hv_balloon Jul 2 01:49:54.471048 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jul 2 01:49:54.471085 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jul 2 01:49:54.483331 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jul 2 01:49:54.483386 kernel: hv_balloon: Memory hot add disabled on ARM64 Jul 2 01:49:54.496240 kernel: Console: switching to colour dummy device 80x25 Jul 2 01:49:54.502821 kernel: Console: switching to colour frame buffer device 128x48 Jul 2 01:49:54.445000 audit[1218]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaac70749f0 a1=aa2c a2=ffff986c24b0 a3=aaaac6fcb010 items=12 ppid=1199 pid=1218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 01:49:54.445000 audit: CWD cwd="/" Jul 2 01:49:54.445000 audit: PATH item=0 name=(null) inode=6639 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 01:49:54.445000 audit: PATH item=1 name=(null) inode=11508 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 01:49:54.445000 audit: PATH item=2 name=(null) inode=11508 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 01:49:54.445000 audit: PATH item=3 name=(null) inode=11509 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 01:49:54.445000 audit: PATH item=4 name=(null) inode=11508 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 01:49:54.445000 audit: PATH item=5 name=(null) inode=11510 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 01:49:54.445000 audit: PATH item=6 name=(null) inode=11508 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 01:49:54.445000 audit: PATH item=7 name=(null) inode=11511 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 01:49:54.445000 audit: PATH item=8 name=(null) inode=11508 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 01:49:54.445000 audit: PATH item=9 name=(null) inode=11512 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 01:49:54.445000 audit: PATH item=10 name=(null) inode=11508 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 01:49:54.445000 audit: PATH item=11 name=(null) inode=11513 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 
01:49:54.445000 audit: PROCTITLE proctitle="(udev-worker)" Jul 2 01:49:54.537846 kernel: hv_utils: Registering HyperV Utility Driver Jul 2 01:49:54.537912 kernel: hv_vmbus: registering driver hv_utils Jul 2 01:49:54.545456 kernel: hv_utils: Heartbeat IC version 3.0 Jul 2 01:49:54.545590 kernel: hv_utils: Shutdown IC version 3.2 Jul 2 01:49:54.548871 kernel: hv_utils: TimeSync IC version 4.0 Jul 2 01:49:54.484509 systemd-networkd[1206]: lo: Link UP Jul 2 01:49:54.545834 systemd-journald[1152]: Time jumped backwards, rotating. Jul 2 01:49:54.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:54.489336 systemd-networkd[1206]: lo: Gained carrier Jul 2 01:49:54.489903 systemd-networkd[1206]: Enumeration completed Jul 2 01:49:54.490015 systemd[1]: Started systemd-networkd.service. Jul 2 01:49:54.495740 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 2 01:49:54.516334 systemd-networkd[1206]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 01:49:54.570776 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1201) Jul 2 01:49:54.570841 kernel: mlx5_core 6eb8:00:02.0 enP28344s1: Link up Jul 2 01:49:54.597089 kernel: hv_netvsc 0022487c-da80-0022-487c-da800022487c eth0: Data path switched to VF: enP28344s1 Jul 2 01:49:54.598099 systemd-networkd[1206]: enP28344s1: Link UP Jul 2 01:49:54.598633 systemd-networkd[1206]: eth0: Link UP Jul 2 01:49:54.598697 systemd-networkd[1206]: eth0: Gained carrier Jul 2 01:49:54.600176 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 01:49:54.606324 systemd-networkd[1206]: enP28344s1: Gained carrier Jul 2 01:49:54.609138 systemd[1]: Finished systemd-udev-settle.service. 
Jul 2 01:49:54.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:54.617648 kernel: kauditd_printk_skb: 20 callbacks suppressed Jul 2 01:49:54.617692 kernel: audit: type=1130 audit(1719884994.613:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:54.617621 systemd[1]: Starting lvm2-activation-early.service... Jul 2 01:49:54.639917 systemd-networkd[1206]: eth0: DHCPv4 address 10.200.20.40/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 2 01:49:54.895143 lvm[1278]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 01:49:54.941678 systemd[1]: Finished lvm2-activation-early.service. Jul 2 01:49:54.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:54.947444 systemd[1]: Reached target cryptsetup.target. Jul 2 01:49:54.970261 kernel: audit: type=1130 audit(1719884994.946:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 01:49:54.971959 systemd[1]: Starting lvm2-activation.service... Jul 2 01:49:54.975869 lvm[1279]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 01:49:54.995722 systemd[1]: Finished lvm2-activation.service. 
Jul 2 01:49:54.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:55.001913 systemd[1]: Reached target local-fs-pre.target.
Jul 2 01:49:55.022595 kernel: audit: type=1130 audit(1719884994.999:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:55.022967 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 2 01:49:55.023083 systemd[1]: Reached target local-fs.target.
Jul 2 01:49:55.027671 systemd[1]: Reached target machines.target.
Jul 2 01:49:55.033208 systemd[1]: Starting ldconfig.service...
Jul 2 01:49:55.052473 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 2 01:49:55.052616 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 01:49:55.053893 systemd[1]: Starting systemd-boot-update.service...
Jul 2 01:49:55.060077 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Jul 2 01:49:55.066720 systemd[1]: Starting systemd-machine-id-commit.service...
Jul 2 01:49:55.072950 systemd[1]: Starting systemd-sysext.service...
Jul 2 01:49:55.089573 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1281 (bootctl)
Jul 2 01:49:55.090650 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Jul 2 01:49:55.125689 systemd[1]: Unmounting usr-share-oem.mount...
Jul 2 01:49:55.195016 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Jul 2 01:49:55.195220 systemd[1]: Unmounted usr-share-oem.mount.
Jul 2 01:49:55.205367 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 2 01:49:55.205925 systemd[1]: Finished systemd-machine-id-commit.service.
Jul 2 01:49:55.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:55.230911 kernel: audit: type=1130 audit(1719884995.210:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:55.250981 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Jul 2 01:49:55.260315 kernel: loop0: detected capacity change from 0 to 193208
Jul 2 01:49:55.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:55.278776 kernel: audit: type=1130 audit(1719884995.259:165): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:55.322789 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 2 01:49:55.347781 kernel: loop1: detected capacity change from 0 to 193208
Jul 2 01:49:55.350991 (sd-sysext)[1294]: Using extensions 'kubernetes'.
Jul 2 01:49:55.351315 (sd-sysext)[1294]: Merged extensions into '/usr'.
Jul 2 01:49:55.366945 systemd[1]: Mounting usr-share-oem.mount...
Jul 2 01:49:55.370932 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 2 01:49:55.372419 systemd[1]: Starting modprobe@dm_mod.service...
Jul 2 01:49:55.377364 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 2 01:49:55.382502 systemd[1]: Starting modprobe@loop.service...
Jul 2 01:49:55.386590 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 2 01:49:55.386995 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 01:49:55.389832 systemd[1]: Mounted usr-share-oem.mount.
Jul 2 01:49:55.394128 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 01:49:55.394356 systemd[1]: Finished modprobe@dm_mod.service.
Jul 2 01:49:55.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:55.400107 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 01:49:55.400218 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 2 01:49:55.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:55.436940 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 01:49:55.437046 kernel: audit: type=1130 audit(1719884995.397:166): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:55.437087 kernel: audit: type=1131 audit(1719884995.399:167): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:55.437277 systemd[1]: Finished modprobe@loop.service.
Jul 2 01:49:55.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:55.458203 kernel: audit: type=1130 audit(1719884995.435:168): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:55.459146 systemd[1]: Finished systemd-sysext.service.
Jul 2 01:49:55.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:55.480660 kernel: audit: type=1131 audit(1719884995.435:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:55.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:55.497625 kernel: audit: type=1130 audit(1719884995.456:170): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:55.497861 systemd[1]: Starting ensure-sysext.service...
Jul 2 01:49:55.502132 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 01:49:55.502295 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 2 01:49:55.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:55.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:55.503599 systemd[1]: Starting systemd-tmpfiles-setup.service...
Jul 2 01:49:55.511317 systemd[1]: Reloading.
Jul 2 01:49:55.517447 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Jul 2 01:49:55.534517 systemd-fsck[1291]: fsck.fat 4.2 (2021-01-31)
Jul 2 01:49:55.534517 systemd-fsck[1291]: /dev/sda1: 236 files, 117047/258078 clusters
Jul 2 01:49:55.537071 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 2 01:49:55.552351 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 2 01:49:55.567963 /usr/lib/systemd/system-generators/torcx-generator[1321]: time="2024-07-02T01:49:55Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]"
Jul 2 01:49:55.571838 /usr/lib/systemd/system-generators/torcx-generator[1321]: time="2024-07-02T01:49:55Z" level=info msg="torcx already run"
Jul 2 01:49:55.636826 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 2 01:49:55.636844 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 2 01:49:55.652531 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 01:49:55.716000 audit: BPF prog-id=30 op=LOAD
Jul 2 01:49:55.716000 audit: BPF prog-id=27 op=UNLOAD
Jul 2 01:49:55.716000 audit: BPF prog-id=31 op=LOAD
Jul 2 01:49:55.716000 audit: BPF prog-id=32 op=LOAD
Jul 2 01:49:55.716000 audit: BPF prog-id=28 op=UNLOAD
Jul 2 01:49:55.716000 audit: BPF prog-id=29 op=UNLOAD
Jul 2 01:49:55.718000 audit: BPF prog-id=33 op=LOAD
Jul 2 01:49:55.718000 audit: BPF prog-id=21 op=UNLOAD
Jul 2 01:49:55.719000 audit: BPF prog-id=34 op=LOAD
Jul 2 01:49:55.719000 audit: BPF prog-id=35 op=LOAD
Jul 2 01:49:55.719000 audit: BPF prog-id=22 op=UNLOAD
Jul 2 01:49:55.719000 audit: BPF prog-id=23 op=UNLOAD
Jul 2 01:49:55.719000 audit: BPF prog-id=36 op=LOAD
Jul 2 01:49:55.719000 audit: BPF prog-id=37 op=LOAD
Jul 2 01:49:55.719000 audit: BPF prog-id=24 op=UNLOAD
Jul 2 01:49:55.719000 audit: BPF prog-id=25 op=UNLOAD
Jul 2 01:49:55.719000 audit: BPF prog-id=38 op=LOAD
Jul 2 01:49:55.719000 audit: BPF prog-id=26 op=UNLOAD
Jul 2 01:49:55.722231 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Jul 2 01:49:55.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:55.733390 systemd[1]: Mounting boot.mount...
Jul 2 01:49:55.739626 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 2 01:49:55.741294 systemd[1]: Starting modprobe@dm_mod.service...
Jul 2 01:49:55.746236 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 2 01:49:55.752392 systemd[1]: Starting modprobe@loop.service...
Jul 2 01:49:55.756054 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 2 01:49:55.756169 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 01:49:55.758345 systemd[1]: Mounted boot.mount.
Jul 2 01:49:55.765464 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 01:49:55.765597 systemd[1]: Finished modprobe@dm_mod.service.
Jul 2 01:49:55.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:55.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:55.770434 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 01:49:55.770547 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 2 01:49:55.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:55.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:55.775448 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 01:49:55.775569 systemd[1]: Finished modprobe@loop.service.
Jul 2 01:49:55.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:55.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:55.780354 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 01:49:55.780447 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 2 01:49:55.782103 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 2 01:49:55.783454 systemd[1]: Starting modprobe@dm_mod.service...
Jul 2 01:49:55.788295 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 2 01:49:55.793316 systemd[1]: Starting modprobe@loop.service...
Jul 2 01:49:55.797019 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 2 01:49:55.797183 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 01:49:55.798010 systemd[1]: Finished systemd-boot-update.service.
Jul 2 01:49:55.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:55.802820 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 01:49:55.802932 systemd[1]: Finished modprobe@dm_mod.service.
Jul 2 01:49:55.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:55.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:55.807624 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 01:49:55.807777 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 2 01:49:55.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:55.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:55.812728 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 01:49:55.812884 systemd[1]: Finished modprobe@loop.service.
Jul 2 01:49:55.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:55.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:55.817507 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 01:49:55.817612 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 2 01:49:55.819888 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 2 01:49:55.821105 systemd[1]: Starting modprobe@dm_mod.service...
Jul 2 01:49:55.826003 systemd[1]: Starting modprobe@drm.service...
Jul 2 01:49:55.830705 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 2 01:49:55.835815 systemd[1]: Starting modprobe@loop.service...
Jul 2 01:49:55.840739 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 2 01:49:55.840874 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 01:49:55.841819 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 01:49:55.841955 systemd[1]: Finished modprobe@dm_mod.service.
Jul 2 01:49:55.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:55.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:55.846740 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 01:49:55.846890 systemd[1]: Finished modprobe@drm.service.
Jul 2 01:49:55.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:55.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:55.851318 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 01:49:55.851429 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 2 01:49:55.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:55.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:55.856529 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 01:49:55.856656 systemd[1]: Finished modprobe@loop.service.
Jul 2 01:49:55.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:55.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:55.861967 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 01:49:55.862039 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 2 01:49:55.863123 systemd[1]: Finished ensure-sysext.service.
Jul 2 01:49:55.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:56.047882 systemd-networkd[1206]: eth0: Gained IPv6LL
Jul 2 01:49:56.052647 systemd[1]: Finished systemd-networkd-wait-online.service.
Jul 2 01:49:56.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:56.107505 systemd[1]: Finished systemd-tmpfiles-setup.service.
Jul 2 01:49:56.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:56.113986 systemd[1]: Starting audit-rules.service...
Jul 2 01:49:56.118803 systemd[1]: Starting clean-ca-certificates.service...
Jul 2 01:49:56.124362 systemd[1]: Starting systemd-journal-catalog-update.service...
Jul 2 01:49:56.129000 audit: BPF prog-id=39 op=LOAD
Jul 2 01:49:56.131053 systemd[1]: Starting systemd-resolved.service...
Jul 2 01:49:56.135000 audit: BPF prog-id=40 op=LOAD
Jul 2 01:49:56.136564 systemd[1]: Starting systemd-timesyncd.service...
Jul 2 01:49:56.141469 systemd[1]: Starting systemd-update-utmp.service...
Jul 2 01:49:56.189000 audit[1401]: SYSTEM_BOOT pid=1401 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:56.193248 systemd[1]: Finished clean-ca-certificates.service.
Jul 2 01:49:56.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:56.198306 systemd[1]: Finished systemd-update-utmp.service.
Jul 2 01:49:56.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:56.202894 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 2 01:49:56.217774 systemd[1]: Started systemd-timesyncd.service.
Jul 2 01:49:56.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:56.222383 systemd[1]: Reached target time-set.target.
Jul 2 01:49:56.300089 systemd-resolved[1398]: Positive Trust Anchors:
Jul 2 01:49:56.300427 systemd-resolved[1398]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 01:49:56.300509 systemd-resolved[1398]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 2 01:49:56.303876 systemd-resolved[1398]: Using system hostname 'ci-3510.3.5-a-267983ca13'.
Jul 2 01:49:56.305374 systemd[1]: Started systemd-resolved.service.
Jul 2 01:49:56.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:56.309851 systemd[1]: Reached target network.target.
Jul 2 01:49:56.315096 systemd[1]: Reached target network-online.target.
Jul 2 01:49:56.319625 systemd[1]: Reached target nss-lookup.target.
Jul 2 01:49:56.346570 systemd[1]: Finished systemd-journal-catalog-update.service.
Jul 2 01:49:56.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 01:49:56.460041 systemd-timesyncd[1400]: Contacted time server 50.205.57.38:123 (0.flatcar.pool.ntp.org).
Jul 2 01:49:56.460113 systemd-timesyncd[1400]: Initial clock synchronization to Tue 2024-07-02 01:49:56.452442 UTC.
Jul 2 01:49:56.487000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Jul 2 01:49:56.487000 audit[1417]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff14cebc0 a2=420 a3=0 items=0 ppid=1395 pid=1417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 01:49:56.487000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jul 2 01:49:56.532587 augenrules[1417]: No rules
Jul 2 01:49:56.533627 systemd[1]: Finished audit-rules.service.
Jul 2 01:50:02.768048 ldconfig[1280]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 2 01:50:02.781309 systemd[1]: Finished ldconfig.service.
Jul 2 01:50:02.787097 systemd[1]: Starting systemd-update-done.service...
Jul 2 01:50:02.811366 systemd[1]: Finished systemd-update-done.service.
Jul 2 01:50:02.816183 systemd[1]: Reached target sysinit.target.
Jul 2 01:50:02.820276 systemd[1]: Started motdgen.path.
Jul 2 01:50:02.823795 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Jul 2 01:50:02.829478 systemd[1]: Started logrotate.timer.
Jul 2 01:50:02.833288 systemd[1]: Started mdadm.timer.
Jul 2 01:50:02.836604 systemd[1]: Started systemd-tmpfiles-clean.timer.
Jul 2 01:50:02.840913 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 2 01:50:02.840944 systemd[1]: Reached target paths.target.
Jul 2 01:50:02.844698 systemd[1]: Reached target timers.target.
Jul 2 01:50:02.849174 systemd[1]: Listening on dbus.socket.
Jul 2 01:50:02.853712 systemd[1]: Starting docker.socket...
Jul 2 01:50:02.873530 systemd[1]: Listening on sshd.socket.
Jul 2 01:50:02.877456 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 01:50:02.878004 systemd[1]: Listening on docker.socket.
Jul 2 01:50:02.881984 systemd[1]: Reached target sockets.target.
Jul 2 01:50:02.885941 systemd[1]: Reached target basic.target.
Jul 2 01:50:02.889771 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Jul 2 01:50:02.889807 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Jul 2 01:50:02.890850 systemd[1]: Starting containerd.service...
Jul 2 01:50:02.895112 systemd[1]: Starting dbus.service...
Jul 2 01:50:02.898990 systemd[1]: Starting enable-oem-cloudinit.service...
Jul 2 01:50:02.903839 systemd[1]: Starting extend-filesystems.service...
Jul 2 01:50:02.910280 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Jul 2 01:50:02.937999 systemd[1]: Starting kubelet.service...
Jul 2 01:50:02.942492 systemd[1]: Starting motdgen.service...
Jul 2 01:50:02.946846 systemd[1]: Started nvidia.service.
Jul 2 01:50:02.951632 systemd[1]: Starting prepare-helm.service...
Jul 2 01:50:02.956899 systemd[1]: Starting ssh-key-proc-cmdline.service...
Jul 2 01:50:02.961876 systemd[1]: Starting sshd-keygen.service...
Jul 2 01:50:02.967496 systemd[1]: Starting systemd-logind.service...
Jul 2 01:50:02.971182 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 01:50:02.971263 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 2 01:50:02.971716 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 2 01:50:02.972482 systemd[1]: Starting update-engine.service...
Jul 2 01:50:02.977804 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Jul 2 01:50:02.989602 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 2 01:50:02.989817 systemd[1]: Finished ssh-key-proc-cmdline.service.
Jul 2 01:50:03.005231 jq[1443]: true
Jul 2 01:50:03.007242 jq[1427]: false
Jul 2 01:50:03.021804 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 2 01:50:03.021951 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Jul 2 01:50:03.035192 systemd[1]: motdgen.service: Deactivated successfully.
Jul 2 01:50:03.035390 systemd[1]: Finished motdgen.service.
Jul 2 01:50:03.059923 systemd-logind[1438]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 2 01:50:03.060383 systemd-logind[1438]: New seat seat0.
Jul 2 01:50:03.062047 jq[1449]: true
Jul 2 01:50:03.069875 extend-filesystems[1428]: Found loop1
Jul 2 01:50:03.074502 extend-filesystems[1428]: Found sda
Jul 2 01:50:03.074502 extend-filesystems[1428]: Found sda1
Jul 2 01:50:03.074502 extend-filesystems[1428]: Found sda2
Jul 2 01:50:03.074502 extend-filesystems[1428]: Found sda3
Jul 2 01:50:03.074502 extend-filesystems[1428]: Found usr
Jul 2 01:50:03.074502 extend-filesystems[1428]: Found sda4
Jul 2 01:50:03.074502 extend-filesystems[1428]: Found sda6
Jul 2 01:50:03.074502 extend-filesystems[1428]: Found sda7
Jul 2 01:50:03.074502 extend-filesystems[1428]: Found sda9
Jul 2 01:50:03.074502 extend-filesystems[1428]: Checking size of /dev/sda9
Jul 2 01:50:03.150803 tar[1446]: linux-arm64/helm
Jul 2 01:50:03.151909 env[1451]: time="2024-07-02T01:50:03.116484495Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Jul 2 01:50:03.177517 extend-filesystems[1428]: Old size kept for /dev/sda9
Jul 2 01:50:03.177517 extend-filesystems[1428]: Found sr0
Jul 2 01:50:03.182736 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 2 01:50:03.182947 systemd[1]: Finished extend-filesystems.service.
Jul 2 01:50:03.205098 env[1451]: time="2024-07-02T01:50:03.204738309Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 2 01:50:03.205098 env[1451]: time="2024-07-02T01:50:03.204904297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 2 01:50:03.209100 env[1451]: time="2024-07-02T01:50:03.209070260Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.161-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 2 01:50:03.209190 env[1451]: time="2024-07-02T01:50:03.209176306Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 2 01:50:03.209509 env[1451]: time="2024-07-02T01:50:03.209488287Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 01:50:03.213803 env[1451]: time="2024-07-02T01:50:03.213780730Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 2 01:50:03.213911 env[1451]: time="2024-07-02T01:50:03.213895134Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 2 01:50:03.214164 env[1451]: time="2024-07-02T01:50:03.214136618Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 2 01:50:03.214347 env[1451]: time="2024-07-02T01:50:03.214331276Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 2 01:50:03.215516 env[1451]: time="2024-07-02T01:50:03.215495868Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 2 01:50:03.217899 env[1451]: time="2024-07-02T01:50:03.217874596Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 01:50:03.219032 env[1451]: time="2024-07-02T01:50:03.219011556Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 2 01:50:03.220727 env[1451]: time="2024-07-02T01:50:03.220702541Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 2 01:50:03.220834 env[1451]: time="2024-07-02T01:50:03.220819744Z" level=info msg="metadata content store policy set" policy=shared
Jul 2 01:50:03.236776 env[1451]: time="2024-07-02T01:50:03.236439085Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 2 01:50:03.236776 env[1451]: time="2024-07-02T01:50:03.236472995Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 2 01:50:03.236776 env[1451]: time="2024-07-02T01:50:03.236485871Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 2 01:50:03.236776 env[1451]: time="2024-07-02T01:50:03.236542933Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 2 01:50:03.236776 env[1451]: time="2024-07-02T01:50:03.236560167Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 2 01:50:03.236776 env[1451]: time="2024-07-02T01:50:03.236574723Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 2 01:50:03.236776 env[1451]: time="2024-07-02T01:50:03.236588478Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 2 01:50:03.237711 env[1451]: time="2024-07-02T01:50:03.237311889Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 2 01:50:03.237711 env[1451]: time="2024-07-02T01:50:03.237343719Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Jul 2 01:50:03.237711 env[1451]: time="2024-07-02T01:50:03.237358035Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 2 01:50:03.237711 env[1451]: time="2024-07-02T01:50:03.237370191Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 2 01:50:03.237711 env[1451]: time="2024-07-02T01:50:03.237382387Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 2 01:50:03.237711 env[1451]: time="2024-07-02T01:50:03.237482755Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 2 01:50:03.237711 env[1451]: time="2024-07-02T01:50:03.237549454Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 2 01:50:03.239899 env[1451]: time="2024-07-02T01:50:03.239652709Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 2 01:50:03.239899 env[1451]: time="2024-07-02T01:50:03.239687178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 2 01:50:03.239899 env[1451]: time="2024-07-02T01:50:03.239701534Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 2 01:50:03.239899 env[1451]: time="2024-07-02T01:50:03.239760675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 2 01:50:03.239899 env[1451]: time="2024-07-02T01:50:03.239775790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 2 01:50:03.239899 env[1451]: time="2024-07-02T01:50:03.239787707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 2 01:50:03.239899 env[1451]: time="2024-07-02T01:50:03.239798743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 2 01:50:03.239899 env[1451]: time="2024-07-02T01:50:03.239810419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 2 01:50:03.239899 env[1451]: time="2024-07-02T01:50:03.239823375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 2 01:50:03.239899 env[1451]: time="2024-07-02T01:50:03.239835292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 2 01:50:03.239899 env[1451]: time="2024-07-02T01:50:03.239847048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 2 01:50:03.239899 env[1451]: time="2024-07-02T01:50:03.239861323Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 2 01:50:03.243678 bash[1481]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 01:50:03.244178 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Jul 2 01:50:03.254019 env[1451]: time="2024-07-02T01:50:03.251737248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 2 01:50:03.254019 env[1451]: time="2024-07-02T01:50:03.251777955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 2 01:50:03.254019 env[1451]: time="2024-07-02T01:50:03.251805667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 2 01:50:03.254019 env[1451]: time="2024-07-02T01:50:03.251819142Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 2 01:50:03.254019 env[1451]: time="2024-07-02T01:50:03.251844774Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Jul 2 01:50:03.254019 env[1451]: time="2024-07-02T01:50:03.251857770Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 2 01:50:03.254019 env[1451]: time="2024-07-02T01:50:03.251875684Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Jul 2 01:50:03.254019 env[1451]: time="2024-07-02T01:50:03.251918111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 2 01:50:03.254241 env[1451]: time="2024-07-02T01:50:03.252165753Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 2 01:50:03.254241 env[1451]: time="2024-07-02T01:50:03.252221375Z" level=info msg="Connect containerd service"
Jul 2 01:50:03.254241 env[1451]: time="2024-07-02T01:50:03.252264721Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 2 01:50:03.270151 env[1451]: time="2024-07-02T01:50:03.255111781Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 01:50:03.270151 env[1451]: time="2024-07-02T01:50:03.255266972Z" level=info msg="Start subscribing containerd event"
Jul 2 01:50:03.270151 env[1451]: time="2024-07-02T01:50:03.255329912Z" level=info msg="Start recovering state"
Jul 2 01:50:03.270151 env[1451]: time="2024-07-02T01:50:03.255390373Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 2 01:50:03.270151 env[1451]: time="2024-07-02T01:50:03.255396171Z" level=info msg="Start event monitor"
Jul 2 01:50:03.270151 env[1451]: time="2024-07-02T01:50:03.255420803Z" level=info msg="Start snapshots syncer"
Jul 2 01:50:03.270151 env[1451]: time="2024-07-02T01:50:03.255427001Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 2 01:50:03.270151 env[1451]: time="2024-07-02T01:50:03.255430800Z" level=info msg="Start cni network conf syncer for default"
Jul 2 01:50:03.270151 env[1451]: time="2024-07-02T01:50:03.255439358Z" level=info msg="Start streaming server"
Jul 2 01:50:03.270151 env[1451]: time="2024-07-02T01:50:03.268163094Z" level=info msg="containerd successfully booted in 0.152362s"
Jul 2 01:50:03.255534 systemd[1]: Started containerd.service.
Jul 2 01:50:03.278042 dbus-daemon[1426]: [system] SELinux support is enabled
Jul 2 01:50:03.278189 systemd[1]: Started dbus.service.
Jul 2 01:50:03.284556 dbus-daemon[1426]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jul 2 01:50:03.283671 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 2 01:50:03.283690 systemd[1]: Reached target system-config.target.
Jul 2 01:50:03.291623 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 2 01:50:03.291642 systemd[1]: Reached target user-config.target.
Jul 2 01:50:03.298081 systemd[1]: Started systemd-logind.service.
Jul 2 01:50:03.405126 systemd[1]: nvidia.service: Deactivated successfully.
Jul 2 01:50:03.743187 update_engine[1440]: I0702 01:50:03.726891 1440 main.cc:92] Flatcar Update Engine starting
Jul 2 01:50:03.779416 systemd[1]: Started update-engine.service.
Jul 2 01:50:03.786089 systemd[1]: Started locksmithd.service.
Jul 2 01:50:03.790076 update_engine[1440]: I0702 01:50:03.789733 1440 update_check_scheduler.cc:74] Next update check in 11m19s
Jul 2 01:50:03.869228 tar[1446]: linux-arm64/LICENSE
Jul 2 01:50:03.869228 tar[1446]: linux-arm64/README.md
Jul 2 01:50:03.873520 systemd[1]: Finished prepare-helm.service.
Jul 2 01:50:03.943120 systemd[1]: Started kubelet.service.
Jul 2 01:50:04.571651 kubelet[1532]: E0702 01:50:04.571582 1532 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 01:50:04.573906 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 01:50:04.574029 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 01:50:04.686938 sshd_keygen[1442]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 2 01:50:04.704209 systemd[1]: Finished sshd-keygen.service.
Jul 2 01:50:04.709894 systemd[1]: Starting issuegen.service...
Jul 2 01:50:04.714476 systemd[1]: Started waagent.service.
Jul 2 01:50:04.718954 systemd[1]: issuegen.service: Deactivated successfully.
Jul 2 01:50:04.719123 systemd[1]: Finished issuegen.service.
Jul 2 01:50:04.724226 systemd[1]: Starting systemd-user-sessions.service...
Jul 2 01:50:04.757112 systemd[1]: Finished systemd-user-sessions.service.
Jul 2 01:50:04.763201 systemd[1]: Started getty@tty1.service.
Jul 2 01:50:04.768491 systemd[1]: Started serial-getty@ttyAMA0.service.
Jul 2 01:50:04.773356 systemd[1]: Reached target getty.target.
Jul 2 01:50:04.777364 systemd[1]: Reached target multi-user.target.
Jul 2 01:50:04.782671 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Jul 2 01:50:04.790964 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jul 2 01:50:04.791117 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Jul 2 01:50:04.796433 systemd[1]: Startup finished in 711ms (kernel) + 13.019s (initrd) + 36.456s (userspace) = 50.187s.
Jul 2 01:50:05.055853 locksmithd[1528]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 2 01:50:05.316078 login[1556]: pam_lastlog(login:session): file /var/log/lastlog is locked/write
Jul 2 01:50:05.336661 login[1557]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul 2 01:50:05.377077 systemd[1]: Created slice user-500.slice.
Jul 2 01:50:05.378118 systemd[1]: Starting user-runtime-dir@500.service...
Jul 2 01:50:05.380535 systemd-logind[1438]: New session 1 of user core.
Jul 2 01:50:05.416067 systemd[1]: Finished user-runtime-dir@500.service.
Jul 2 01:50:05.417560 systemd[1]: Starting user@500.service...
Jul 2 01:50:05.448515 (systemd)[1560]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 2 01:50:05.622628 systemd[1560]: Queued start job for default target default.target.
Jul 2 01:50:05.623141 systemd[1560]: Reached target paths.target.
Jul 2 01:50:05.623161 systemd[1560]: Reached target sockets.target.
Jul 2 01:50:05.623171 systemd[1560]: Reached target timers.target.
Jul 2 01:50:05.623181 systemd[1560]: Reached target basic.target.
Jul 2 01:50:05.623221 systemd[1560]: Reached target default.target.
Jul 2 01:50:05.623244 systemd[1560]: Startup finished in 168ms.
Jul 2 01:50:05.623296 systemd[1]: Started user@500.service.
Jul 2 01:50:05.624279 systemd[1]: Started session-1.scope.
Jul 2 01:50:06.316811 login[1556]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul 2 01:50:06.320777 systemd-logind[1438]: New session 2 of user core.
Jul 2 01:50:06.321186 systemd[1]: Started session-2.scope.
Jul 2 01:50:10.719008 waagent[1553]: 2024-07-02T01:50:10.718886Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2
Jul 2 01:50:10.726321 waagent[1553]: 2024-07-02T01:50:10.726232Z INFO Daemon Daemon OS: flatcar 3510.3.5
Jul 2 01:50:10.731037 waagent[1553]: 2024-07-02T01:50:10.730969Z INFO Daemon Daemon Python: 3.9.16
Jul 2 01:50:10.736946 waagent[1553]: 2024-07-02T01:50:10.736867Z INFO Daemon Daemon Run daemon
Jul 2 01:50:10.741769 waagent[1553]: 2024-07-02T01:50:10.741691Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.5'
Jul 2 01:50:10.759717 waagent[1553]: 2024-07-02T01:50:10.759588Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Jul 2 01:50:10.775329 waagent[1553]: 2024-07-02T01:50:10.775186Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Jul 2 01:50:10.784591 waagent[1553]: 2024-07-02T01:50:10.784518Z INFO Daemon Daemon cloud-init is enabled: False
Jul 2 01:50:10.789377 waagent[1553]: 2024-07-02T01:50:10.789318Z INFO Daemon Daemon Using waagent for provisioning
Jul 2 01:50:10.795101 waagent[1553]: 2024-07-02T01:50:10.795044Z INFO Daemon Daemon Activate resource disk
Jul 2 01:50:10.799569 waagent[1553]: 2024-07-02T01:50:10.799511Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Jul 2 01:50:10.814063 waagent[1553]: 2024-07-02T01:50:10.814006Z INFO Daemon Daemon Found device: None
Jul 2 01:50:10.818427 waagent[1553]: 2024-07-02T01:50:10.818370Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Jul 2 01:50:10.826775 waagent[1553]: 2024-07-02T01:50:10.826703Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Jul 2 01:50:10.838520 waagent[1553]: 2024-07-02T01:50:10.838462Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jul 2 01:50:10.844566 waagent[1553]: 2024-07-02T01:50:10.844511Z INFO Daemon Daemon Running default provisioning handler
Jul 2 01:50:10.858098 waagent[1553]: 2024-07-02T01:50:10.857999Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Jul 2 01:50:10.873500 waagent[1553]: 2024-07-02T01:50:10.873364Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Jul 2 01:50:10.883250 waagent[1553]: 2024-07-02T01:50:10.883160Z INFO Daemon Daemon cloud-init is enabled: False
Jul 2 01:50:10.888822 waagent[1553]: 2024-07-02T01:50:10.888711Z INFO Daemon Daemon Copying ovf-env.xml
Jul 2 01:50:10.950078 waagent[1553]: 2024-07-02T01:50:10.949951Z INFO Daemon Daemon Successfully mounted dvd
Jul 2 01:50:11.017829 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Jul 2 01:50:11.056265 waagent[1553]: 2024-07-02T01:50:11.056107Z INFO Daemon Daemon Detect protocol endpoint
Jul 2 01:50:11.062087 waagent[1553]: 2024-07-02T01:50:11.062002Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jul 2 01:50:11.067949 waagent[1553]: 2024-07-02T01:50:11.067872Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Jul 2 01:50:11.074415 waagent[1553]: 2024-07-02T01:50:11.074346Z INFO Daemon Daemon Test for route to 168.63.129.16
Jul 2 01:50:11.079943 waagent[1553]: 2024-07-02T01:50:11.079879Z INFO Daemon Daemon Route to 168.63.129.16 exists
Jul 2 01:50:11.085156 waagent[1553]: 2024-07-02T01:50:11.085094Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Jul 2 01:50:11.253692 waagent[1553]: 2024-07-02T01:50:11.253620Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Jul 2 01:50:11.260863 waagent[1553]: 2024-07-02T01:50:11.260818Z INFO Daemon Daemon Wire protocol version:2012-11-30
Jul 2 01:50:11.266555 waagent[1553]: 2024-07-02T01:50:11.266488Z INFO Daemon Daemon Server preferred version:2015-04-05
Jul 2 01:50:11.850731 waagent[1553]: 2024-07-02T01:50:11.850564Z INFO Daemon Daemon Initializing goal state during protocol detection
Jul 2 01:50:11.865530 waagent[1553]: 2024-07-02T01:50:11.865453Z INFO Daemon Daemon Forcing an update of the goal state..
Jul 2 01:50:11.871042 waagent[1553]: 2024-07-02T01:50:11.870975Z INFO Daemon Daemon Fetching goal state [incarnation 1]
Jul 2 01:50:11.949117 waagent[1553]: 2024-07-02T01:50:11.948986Z INFO Daemon Daemon Found private key matching thumbprint 7BB49B77886E69ECFE71A87ABD537418BE933108
Jul 2 01:50:11.957272 waagent[1553]: 2024-07-02T01:50:11.957203Z INFO Daemon Daemon Certificate with thumbprint 9D146C54DABB735A36F0B1A0AA773212C5F1DD0C has no matching private key.
Jul 2 01:50:11.966246 waagent[1553]: 2024-07-02T01:50:11.966183Z INFO Daemon Daemon Fetch goal state completed
Jul 2 01:50:11.984935 waagent[1553]: 2024-07-02T01:50:11.984877Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 2763fcd3-6ac5-4e76-9cbb-9089aa59f585 New eTag: 4917839847786940966]
Jul 2 01:50:11.995109 waagent[1553]: 2024-07-02T01:50:11.995044Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob
Jul 2 01:50:12.009515 waagent[1553]: 2024-07-02T01:50:12.009431Z INFO Daemon Daemon Starting provisioning
Jul 2 01:50:12.014309 waagent[1553]: 2024-07-02T01:50:12.014243Z INFO Daemon Daemon Handle ovf-env.xml.
Jul 2 01:50:12.018987 waagent[1553]: 2024-07-02T01:50:12.018927Z INFO Daemon Daemon Set hostname [ci-3510.3.5-a-267983ca13]
Jul 2 01:50:12.087134 waagent[1553]: 2024-07-02T01:50:12.086997Z INFO Daemon Daemon Publish hostname [ci-3510.3.5-a-267983ca13]
Jul 2 01:50:12.093351 waagent[1553]: 2024-07-02T01:50:12.093279Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Jul 2 01:50:12.099583 waagent[1553]: 2024-07-02T01:50:12.099520Z INFO Daemon Daemon Primary interface is [eth0]
Jul 2 01:50:12.116901 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully.
Jul 2 01:50:12.117078 systemd[1]: Stopped systemd-networkd-wait-online.service.
Jul 2 01:50:12.117138 systemd[1]: Stopping systemd-networkd-wait-online.service...
Jul 2 01:50:12.117389 systemd[1]: Stopping systemd-networkd.service...
Jul 2 01:50:12.121802 systemd-networkd[1206]: eth0: DHCPv6 lease lost
Jul 2 01:50:12.123175 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 2 01:50:12.123360 systemd[1]: Stopped systemd-networkd.service.
Jul 2 01:50:12.125323 systemd[1]: Starting systemd-networkd.service...
Jul 2 01:50:12.153847 systemd-networkd[1604]: enP28344s1: Link UP
Jul 2 01:50:12.154096 systemd-networkd[1604]: enP28344s1: Gained carrier
Jul 2 01:50:12.155079 systemd-networkd[1604]: eth0: Link UP
Jul 2 01:50:12.155173 systemd-networkd[1604]: eth0: Gained carrier
Jul 2 01:50:12.155565 systemd-networkd[1604]: lo: Link UP
Jul 2 01:50:12.155636 systemd-networkd[1604]: lo: Gained carrier
Jul 2 01:50:12.155972 systemd-networkd[1604]: eth0: Gained IPv6LL
Jul 2 01:50:12.157229 systemd-networkd[1604]: Enumeration completed
Jul 2 01:50:12.157435 systemd[1]: Started systemd-networkd.service.
Jul 2 01:50:12.159119 systemd[1]: Starting systemd-networkd-wait-online.service...
Jul 2 01:50:12.159433 systemd-networkd[1604]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 01:50:12.164715 waagent[1553]: 2024-07-02T01:50:12.164432Z INFO Daemon Daemon Create user account if not exists
Jul 2 01:50:12.170529 waagent[1553]: 2024-07-02T01:50:12.170424Z INFO Daemon Daemon User core already exists, skip useradd
Jul 2 01:50:12.176578 waagent[1553]: 2024-07-02T01:50:12.176481Z INFO Daemon Daemon Configure sudoer
Jul 2 01:50:12.181619 waagent[1553]: 2024-07-02T01:50:12.181533Z INFO Daemon Daemon Configure sshd
Jul 2 01:50:12.185661 waagent[1553]: 2024-07-02T01:50:12.185592Z INFO Daemon Daemon Deploy ssh public key.
Jul 2 01:50:12.185848 systemd-networkd[1604]: eth0: DHCPv4 address 10.200.20.40/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jul 2 01:50:12.191309 systemd[1]: Finished systemd-networkd-wait-online.service.
Jul 2 01:50:13.442732 waagent[1553]: 2024-07-02T01:50:13.442659Z INFO Daemon Daemon Provisioning complete
Jul 2 01:50:13.465373 waagent[1553]: 2024-07-02T01:50:13.465307Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Jul 2 01:50:13.471793 waagent[1553]: 2024-07-02T01:50:13.471715Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Jul 2 01:50:13.482281 waagent[1553]: 2024-07-02T01:50:13.482203Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent
Jul 2 01:50:13.777965 waagent[1613]: 2024-07-02T01:50:13.777834Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent
Jul 2 01:50:13.779025 waagent[1613]: 2024-07-02T01:50:13.778972Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jul 2 01:50:13.779253 waagent[1613]: 2024-07-02T01:50:13.779205Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Jul 2 01:50:13.791325 waagent[1613]: 2024-07-02T01:50:13.791257Z INFO ExtHandler ExtHandler Forcing an update of the goal state..
Jul 2 01:50:13.791576 waagent[1613]: 2024-07-02T01:50:13.791527Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1]
Jul 2 01:50:13.856554 waagent[1613]: 2024-07-02T01:50:13.856435Z INFO ExtHandler ExtHandler Found private key matching thumbprint 7BB49B77886E69ECFE71A87ABD537418BE933108
Jul 2 01:50:13.856928 waagent[1613]: 2024-07-02T01:50:13.856875Z INFO ExtHandler ExtHandler Certificate with thumbprint 9D146C54DABB735A36F0B1A0AA773212C5F1DD0C has no matching private key.
Jul 2 01:50:13.857241 waagent[1613]: 2024-07-02T01:50:13.857193Z INFO ExtHandler ExtHandler Fetch goal state completed
Jul 2 01:50:13.869831 waagent[1613]: 2024-07-02T01:50:13.869782Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 7471dc85-fb54-46da-a8a6-524614601eea New eTag: 4917839847786940966]
Jul 2 01:50:13.870445 waagent[1613]: 2024-07-02T01:50:13.870388Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob
Jul 2 01:50:13.942875 waagent[1613]: 2024-07-02T01:50:13.942729Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.5; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Jul 2 01:50:13.967003 waagent[1613]: 2024-07-02T01:50:13.966933Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1613
Jul 2 01:50:13.970846 waagent[1613]: 2024-07-02T01:50:13.970785Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.5', '', 'Flatcar Container Linux by Kinvolk']
Jul 2 01:50:13.972274 waagent[1613]: 2024-07-02T01:50:13.972214Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Jul 2 01:50:14.117445 waagent[1613]: 2024-07-02T01:50:14.117390Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Jul 2 01:50:14.117999 waagent[1613]: 2024-07-02T01:50:14.117945Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Jul 2 01:50:14.125486 waagent[1613]: 2024-07-02T01:50:14.125436Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Jul 2 01:50:14.126104 waagent[1613]: 2024-07-02T01:50:14.126048Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Jul 2 01:50:14.127339 waagent[1613]: 2024-07-02T01:50:14.127277Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True]
Jul 2 01:50:14.128774 waagent[1613]: 2024-07-02T01:50:14.128698Z INFO ExtHandler ExtHandler Starting env monitor service.
Jul 2 01:50:14.129046 waagent[1613]: 2024-07-02T01:50:14.128980Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jul 2 01:50:14.129556 waagent[1613]: 2024-07-02T01:50:14.129492Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Jul 2 01:50:14.130132 waagent[1613]: 2024-07-02T01:50:14.130071Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Jul 2 01:50:14.130437 waagent[1613]: 2024-07-02T01:50:14.130380Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Jul 2 01:50:14.130437 waagent[1613]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Jul 2 01:50:14.130437 waagent[1613]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Jul 2 01:50:14.130437 waagent[1613]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Jul 2 01:50:14.130437 waagent[1613]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Jul 2 01:50:14.130437 waagent[1613]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jul 2 01:50:14.130437 waagent[1613]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jul 2 01:50:14.132419 waagent[1613]: 2024-07-02T01:50:14.132268Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Jul 2 01:50:14.133208 waagent[1613]: 2024-07-02T01:50:14.133138Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 2 01:50:14.133379 waagent[1613]: 2024-07-02T01:50:14.133326Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 2 01:50:14.133942 waagent[1613]: 2024-07-02T01:50:14.133878Z INFO EnvHandler ExtHandler Configure routes Jul 2 01:50:14.134091 waagent[1613]: 2024-07-02T01:50:14.134046Z INFO EnvHandler ExtHandler Gateway:None Jul 2 01:50:14.134204 waagent[1613]: 2024-07-02T01:50:14.134163Z INFO EnvHandler ExtHandler Routes:None Jul 2 01:50:14.135058 waagent[1613]: 2024-07-02T01:50:14.135002Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 2 01:50:14.135211 waagent[1613]: 2024-07-02T01:50:14.135145Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jul 2 01:50:14.135911 waagent[1613]: 2024-07-02T01:50:14.135823Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 2 01:50:14.136087 waagent[1613]: 2024-07-02T01:50:14.136019Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jul 2 01:50:14.136366 waagent[1613]: 2024-07-02T01:50:14.136305Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 2 01:50:14.148493 waagent[1613]: 2024-07-02T01:50:14.148435Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Jul 2 01:50:14.149157 waagent[1613]: 2024-07-02T01:50:14.149111Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Jul 2 01:50:14.150116 waagent[1613]: 2024-07-02T01:50:14.150062Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. 
Error: 'NoneType' object has no attribute 'getheaders' Jul 2 01:50:14.168978 waagent[1613]: 2024-07-02T01:50:14.168924Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. Jul 2 01:50:14.188158 waagent[1613]: 2024-07-02T01:50:14.188048Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1604' Jul 2 01:50:14.274070 waagent[1613]: 2024-07-02T01:50:14.273946Z INFO MonitorHandler ExtHandler Network interfaces: Jul 2 01:50:14.274070 waagent[1613]: Executing ['ip', '-a', '-o', 'link']: Jul 2 01:50:14.274070 waagent[1613]: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 2 01:50:14.274070 waagent[1613]: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7c:da:80 brd ff:ff:ff:ff:ff:ff Jul 2 01:50:14.274070 waagent[1613]: 3: enP28344s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7c:da:80 brd ff:ff:ff:ff:ff:ff\ altname enP28344p0s2 Jul 2 01:50:14.274070 waagent[1613]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 2 01:50:14.274070 waagent[1613]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 2 01:50:14.274070 waagent[1613]: 2: eth0 inet 10.200.20.40/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 2 01:50:14.274070 waagent[1613]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 2 01:50:14.274070 waagent[1613]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Jul 2 01:50:14.274070 waagent[1613]: 2: eth0 inet6 fe80::222:48ff:fe7c:da80/64 scope link \ valid_lft forever preferred_lft forever Jul 2 01:50:14.421715 waagent[1613]: 2024-07-02T01:50:14.421618Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.11.1.4 -- exiting
Jul 2 01:50:14.485677 waagent[1553]: 2024-07-02T01:50:14.485566Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Jul 2 01:50:14.490132 waagent[1553]: 2024-07-02T01:50:14.490079Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.11.1.4 to be the latest agent Jul 2 01:50:14.633836 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 01:50:14.633954 systemd[1]: Stopped kubelet.service. Jul 2 01:50:14.635237 systemd[1]: Starting kubelet.service... Jul 2 01:50:14.766422 systemd[1]: Started kubelet.service. Jul 2 01:50:14.827773 kubelet[1649]: E0702 01:50:14.827718 1649 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 01:50:14.830090 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 01:50:14.830240 systemd[1]: kubelet.service: Failed with result 'exit-code'.
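The "Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1604'" error above is int() being applied to the whole `MainPID=1604` line rather than the value after the equals sign. A hedged sketch of that failure mode and its fix (illustrative only, not the agent's actual code):

```python
# Output shape of e.g. `systemctl show -p MainPID <unit>` (assumed for illustration):
systemctl_output = "MainPID=1604"

# Reproduces the error seen in the log:
try:
    pid = int(systemctl_output)
except ValueError as exc:
    print(exc)  # invalid literal for int() with base 10: 'MainPID=1604'

# Parsing only the value after '=' succeeds:
pid = int(systemctl_output.split("=", 1)[1])
print(pid)  # 1604
```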
Jul 2 01:50:15.665134 waagent[1642]: 2024-07-02T01:50:15.665045Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.11.1.4) Jul 2 01:50:15.666121 waagent[1642]: 2024-07-02T01:50:15.666066Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.5 Jul 2 01:50:15.666338 waagent[1642]: 2024-07-02T01:50:15.666291Z INFO ExtHandler ExtHandler Python: 3.9.16 Jul 2 01:50:15.666555 waagent[1642]: 2024-07-02T01:50:15.666511Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Jul 2 01:50:15.674777 waagent[1642]: 2024-07-02T01:50:15.674659Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.5; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jul 2 01:50:15.675280 waagent[1642]: 2024-07-02T01:50:15.675227Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 2 01:50:15.675506 waagent[1642]: 2024-07-02T01:50:15.675460Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 2 01:50:15.688333 waagent[1642]: 2024-07-02T01:50:15.688264Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 2 01:50:15.697393 waagent[1642]: 2024-07-02T01:50:15.697343Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.151 Jul 2 01:50:15.698454 waagent[1642]: 2024-07-02T01:50:15.698398Z INFO ExtHandler Jul 2 01:50:15.698689 waagent[1642]: 2024-07-02T01:50:15.698640Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 27c562a8-c045-420f-8f4d-805966f40689 eTag: 4917839847786940966 source: Fabric] Jul 2 01:50:15.699519 waagent[1642]: 2024-07-02T01:50:15.699464Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jul 2 01:50:15.700846 waagent[1642]: 2024-07-02T01:50:15.700787Z INFO ExtHandler Jul 2 01:50:15.701080 waagent[1642]: 2024-07-02T01:50:15.701024Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 2 01:50:15.707419 waagent[1642]: 2024-07-02T01:50:15.707374Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 2 01:50:15.707980 waagent[1642]: 2024-07-02T01:50:15.707933Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Jul 2 01:50:15.726698 waagent[1642]: 2024-07-02T01:50:15.726637Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Jul 2 01:50:15.792810 waagent[1642]: 2024-07-02T01:50:15.792673Z INFO ExtHandler Downloaded certificate {'thumbprint': '9D146C54DABB735A36F0B1A0AA773212C5F1DD0C', 'hasPrivateKey': False} Jul 2 01:50:15.794002 waagent[1642]: 2024-07-02T01:50:15.793919Z INFO ExtHandler Downloaded certificate {'thumbprint': '7BB49B77886E69ECFE71A87ABD537418BE933108', 'hasPrivateKey': True} Jul 2 01:50:15.795195 waagent[1642]: 2024-07-02T01:50:15.795137Z INFO ExtHandler Fetch goal state completed Jul 2 01:50:15.817012 waagent[1642]: 2024-07-02T01:50:15.816921Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.7 1 Nov 2022 (Library: OpenSSL 3.0.7 1 Nov 2022) Jul 2 01:50:15.828829 waagent[1642]: 2024-07-02T01:50:15.828723Z INFO ExtHandler ExtHandler WALinuxAgent-2.11.1.4 running as process 1642 Jul 2 01:50:15.832425 waagent[1642]: 2024-07-02T01:50:15.832366Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.5', '', 'Flatcar Container Linux by Kinvolk'] Jul 2 01:50:15.834012 waagent[1642]: 2024-07-02T01:50:15.833955Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 2 01:50:15.838721 waagent[1642]: 2024-07-02T01:50:15.838673Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 2 01:50:15.839218 waagent[1642]: 2024-07-02T01:50:15.839163Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Jul 2 01:50:15.847222 waagent[1642]: 2024-07-02T01:50:15.847174Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jul 2 01:50:15.847841 waagent[1642]: 2024-07-02T01:50:15.847781Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Jul 2 01:50:15.853636 waagent[1642]: 2024-07-02T01:50:15.853545Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jul 2 01:50:15.854743 waagent[1642]: 2024-07-02T01:50:15.854683Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jul 2 01:50:15.856326 waagent[1642]: 2024-07-02T01:50:15.856258Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 2 01:50:15.856601 waagent[1642]: 2024-07-02T01:50:15.856533Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 2 01:50:15.857179 waagent[1642]: 2024-07-02T01:50:15.857107Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 2 01:50:15.857794 waagent[1642]: 2024-07-02T01:50:15.857697Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Jul 2 01:50:15.858106 waagent[1642]: 2024-07-02T01:50:15.858046Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 2 01:50:15.858106 waagent[1642]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 2 01:50:15.858106 waagent[1642]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jul 2 01:50:15.858106 waagent[1642]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 2 01:50:15.858106 waagent[1642]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 2 01:50:15.858106 waagent[1642]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 2 01:50:15.858106 waagent[1642]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 2 01:50:15.860376 waagent[1642]: 2024-07-02T01:50:15.860252Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 2 01:50:15.860603 waagent[1642]: 2024-07-02T01:50:15.860527Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jul 2 01:50:15.861250 waagent[1642]: 2024-07-02T01:50:15.861178Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 2 01:50:15.861836 waagent[1642]: 2024-07-02T01:50:15.861745Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 2 01:50:15.862007 waagent[1642]: 2024-07-02T01:50:15.861950Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jul 2 01:50:15.862576 waagent[1642]: 2024-07-02T01:50:15.862497Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 2 01:50:15.862819 waagent[1642]: 2024-07-02T01:50:15.862726Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Jul 2 01:50:15.863052 waagent[1642]: 2024-07-02T01:50:15.862993Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 2 01:50:15.865958 waagent[1642]: 2024-07-02T01:50:15.865897Z INFO EnvHandler ExtHandler Configure routes Jul 2 01:50:15.868640 waagent[1642]: 2024-07-02T01:50:15.868573Z INFO EnvHandler ExtHandler Gateway:None Jul 2 01:50:15.871649 waagent[1642]: 2024-07-02T01:50:15.871510Z INFO EnvHandler ExtHandler Routes:None Jul 2 01:50:15.878820 waagent[1642]: 2024-07-02T01:50:15.878715Z INFO MonitorHandler ExtHandler Network interfaces: Jul 2 01:50:15.878820 waagent[1642]: Executing ['ip', '-a', '-o', 'link']: Jul 2 01:50:15.878820 waagent[1642]: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 2 01:50:15.878820 waagent[1642]: 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7c:da:80 brd ff:ff:ff:ff:ff:ff Jul 2 01:50:15.878820 waagent[1642]: 3: enP28344s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7c:da:80 brd ff:ff:ff:ff:ff:ff\ altname enP28344p0s2 Jul 2 01:50:15.878820 waagent[1642]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 2 01:50:15.878820 waagent[1642]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 2 01:50:15.878820 waagent[1642]: 2: eth0 inet 10.200.20.40/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 2 01:50:15.878820 waagent[1642]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 2 01:50:15.878820 waagent[1642]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Jul 2 01:50:15.878820 waagent[1642]: 2: eth0 inet6 fe80::222:48ff:fe7c:da80/64 scope link \ valid_lft forever preferred_lft forever Jul 2 01:50:15.890170 waagent[1642]: 2024-07-02T01:50:15.890086Z INFO ExtHandler ExtHandler Downloading agent manifest
Jul 2 01:50:15.902198 waagent[1642]: 2024-07-02T01:50:15.902133Z INFO ExtHandler ExtHandler Jul 2 01:50:15.902357 waagent[1642]: 2024-07-02T01:50:15.902302Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 45a639a1-7251-4d69-8562-ccd639fdafbd correlation 43137e91-ccad-4e9c-8de9-3001d96289cb created: 2024-07-02T01:48:28.973122Z] Jul 2 01:50:15.903250 waagent[1642]: 2024-07-02T01:50:15.903190Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jul 2 01:50:15.905114 waagent[1642]: 2024-07-02T01:50:15.905059Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Jul 2 01:50:15.926065 waagent[1642]: 2024-07-02T01:50:15.925977Z INFO ExtHandler ExtHandler Looking for existing remote access users. Jul 2 01:50:15.963393 waagent[1642]: 2024-07-02T01:50:15.963312Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.11.1.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 4970C3E3-70D4-4882-B3F8-3790350E150F;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Jul 2 01:50:16.155856 waagent[1642]: 2024-07-02T01:50:16.155715Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jul 2 01:50:16.155856 waagent[1642]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 2 01:50:16.155856 waagent[1642]: pkts bytes target prot opt in out source destination Jul 2 01:50:16.155856 waagent[1642]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 2 01:50:16.155856 waagent[1642]: pkts bytes target prot opt in out source destination Jul 2 01:50:16.155856 waagent[1642]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 2 01:50:16.155856 waagent[1642]: pkts bytes target prot opt in out source destination Jul 2 01:50:16.155856 waagent[1642]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jul 2 01:50:16.155856 waagent[1642]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 2 01:50:16.155856 waagent[1642]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 2 01:50:16.163292 waagent[1642]: 2024-07-02T01:50:16.163195Z INFO EnvHandler ExtHandler Current Firewall rules: Jul 2 01:50:16.163292 waagent[1642]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 2 01:50:16.163292 waagent[1642]: pkts bytes target prot opt in out source destination Jul 2 01:50:16.163292 waagent[1642]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 2 01:50:16.163292 waagent[1642]: pkts bytes target prot opt in out source destination Jul 2 01:50:16.163292 waagent[1642]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 2 01:50:16.163292 waagent[1642]: pkts bytes target prot opt in out source destination Jul 2 01:50:16.163292 waagent[1642]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 2 01:50:16.163292 waagent[1642]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 2 01:50:16.163292 waagent[1642]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 2 01:50:16.164077 waagent[1642]: 2024-07-02T01:50:16.164031Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jul 2 01:50:25.063255 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 01:50:25.063428 systemd[1]: Stopped kubelet.service. Jul 2 01:50:25.064780 systemd[1]: Starting kubelet.service... Jul 2 01:50:25.251401 systemd[1]: Started kubelet.service. 
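The OUTPUT-chain counter dump above shows the three wireserver rules the agent installs for 168.63.129.16. A reconstruction of equivalent iptables commands, derived only from that dump (the agent's actual invocation, including which table it targets, may differ):

```shell
# Allow TCP/53 (DNS) to the Azure wireserver
iptables -A OUTPUT -d 168.63.129.16 -p tcp --dport 53 -j ACCEPT
# Allow root-owned (UID 0) traffic, i.e. the agent itself
iptables -A OUTPUT -d 168.63.129.16 -p tcp -m owner --uid-owner 0 -j ACCEPT
# Drop new connections from everything else
iptables -A OUTPUT -d 168.63.129.16 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP
```

The net effect is that only DNS queries and the root-owned agent process can open connections to the wireserver endpoint.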
Jul 2 01:50:25.287090 kubelet[1709]: E0702 01:50:25.287017 1709 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 01:50:25.289468 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 01:50:25.289616 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 01:50:35.313374 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 2 01:50:35.313539 systemd[1]: Stopped kubelet.service. Jul 2 01:50:35.314899 systemd[1]: Starting kubelet.service... Jul 2 01:50:35.509649 systemd[1]: Started kubelet.service. Jul 2 01:50:35.555650 kubelet[1719]: E0702 01:50:35.555590 1719 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 01:50:35.558252 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 01:50:35.558377 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 01:50:35.716628 systemd[1]: Created slice system-sshd.slice. Jul 2 01:50:35.718258 systemd[1]: Started sshd@0-10.200.20.40:22-10.200.16.10:55354.service. Jul 2 01:50:36.469279 sshd[1727]: Accepted publickey for core from 10.200.16.10 port 55354 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o Jul 2 01:50:36.487180 sshd[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 01:50:36.491562 systemd[1]: Started session-3.scope. Jul 2 01:50:36.492636 systemd-logind[1438]: New session 3 of user core. 
Jul 2 01:50:36.878635 systemd[1]: Started sshd@1-10.200.20.40:22-10.200.16.10:55356.service. Jul 2 01:50:37.304300 sshd[1732]: Accepted publickey for core from 10.200.16.10 port 55356 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o Jul 2 01:50:37.305892 sshd[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 01:50:37.309538 systemd-logind[1438]: New session 4 of user core. Jul 2 01:50:37.309848 systemd[1]: Started session-4.scope. Jul 2 01:50:37.615131 sshd[1732]: pam_unix(sshd:session): session closed for user core Jul 2 01:50:37.617314 systemd[1]: sshd@1-10.200.20.40:22-10.200.16.10:55356.service: Deactivated successfully. Jul 2 01:50:37.618028 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 01:50:37.619003 systemd-logind[1438]: Session 4 logged out. Waiting for processes to exit. Jul 2 01:50:37.620324 systemd-logind[1438]: Removed session 4. Jul 2 01:50:37.692471 systemd[1]: Started sshd@2-10.200.20.40:22-10.200.16.10:55362.service. Jul 2 01:50:38.158521 sshd[1738]: Accepted publickey for core from 10.200.16.10 port 55362 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o Jul 2 01:50:38.159807 sshd[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 01:50:38.163512 systemd-logind[1438]: New session 5 of user core. Jul 2 01:50:38.163979 systemd[1]: Started session-5.scope. Jul 2 01:50:38.501848 sshd[1738]: pam_unix(sshd:session): session closed for user core Jul 2 01:50:38.504620 systemd[1]: sshd@2-10.200.20.40:22-10.200.16.10:55362.service: Deactivated successfully. Jul 2 01:50:38.505312 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 01:50:38.505844 systemd-logind[1438]: Session 5 logged out. Waiting for processes to exit. Jul 2 01:50:38.506634 systemd-logind[1438]: Removed session 5. Jul 2 01:50:38.581837 systemd[1]: Started sshd@3-10.200.20.40:22-10.200.16.10:32850.service. 
Jul 2 01:50:39.047675 sshd[1744]: Accepted publickey for core from 10.200.16.10 port 32850 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o Jul 2 01:50:39.050527 sshd[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 01:50:39.054689 systemd[1]: Started session-6.scope. Jul 2 01:50:39.055615 systemd-logind[1438]: New session 6 of user core. Jul 2 01:50:39.396567 sshd[1744]: pam_unix(sshd:session): session closed for user core Jul 2 01:50:39.399238 systemd[1]: sshd@3-10.200.20.40:22-10.200.16.10:32850.service: Deactivated successfully. Jul 2 01:50:39.399942 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 01:50:39.400467 systemd-logind[1438]: Session 6 logged out. Waiting for processes to exit. Jul 2 01:50:39.401317 systemd-logind[1438]: Removed session 6. Jul 2 01:50:39.468337 systemd[1]: Started sshd@4-10.200.20.40:22-10.200.16.10:32856.service. Jul 2 01:50:39.900108 sshd[1750]: Accepted publickey for core from 10.200.16.10 port 32856 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o Jul 2 01:50:39.901627 sshd[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 01:50:39.905746 systemd[1]: Started session-7.scope. Jul 2 01:50:39.906820 systemd-logind[1438]: New session 7 of user core. Jul 2 01:50:40.390564 sudo[1753]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 01:50:40.390796 sudo[1753]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 01:50:40.424989 systemd[1]: Starting docker.service... 
Jul 2 01:50:40.460279 env[1763]: time="2024-07-02T01:50:40.460227536Z" level=info msg="Starting up" Jul 2 01:50:40.461623 env[1763]: time="2024-07-02T01:50:40.461592297Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 01:50:40.461740 env[1763]: time="2024-07-02T01:50:40.461726133Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 01:50:40.461881 env[1763]: time="2024-07-02T01:50:40.461862489Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc Jul 2 01:50:40.462131 env[1763]: time="2024-07-02T01:50:40.462113921Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 2 01:50:40.463718 env[1763]: time="2024-07-02T01:50:40.463696036Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 01:50:40.463854 env[1763]: time="2024-07-02T01:50:40.463837511Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 01:50:40.463921 env[1763]: time="2024-07-02T01:50:40.463907949Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc Jul 2 01:50:40.463977 env[1763]: time="2024-07-02T01:50:40.463964788Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 2 01:50:40.469314 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3200644649-merged.mount: Deactivated successfully. Jul 2 01:50:40.549950 env[1763]: time="2024-07-02T01:50:40.549910213Z" level=info msg="Loading containers: start." Jul 2 01:50:40.717781 kernel: Initializing XFRM netlink socket Jul 2 01:50:40.747468 env[1763]: time="2024-07-02T01:50:40.747417120Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jul 2 01:50:40.872392 systemd-networkd[1604]: docker0: Link UP Jul 2 01:50:40.890618 env[1763]: time="2024-07-02T01:50:40.890582845Z" level=info msg="Loading containers: done." Jul 2 01:50:40.901047 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1284964426-merged.mount: Deactivated successfully. Jul 2 01:50:40.914038 env[1763]: time="2024-07-02T01:50:40.913992365Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 01:50:40.914453 env[1763]: time="2024-07-02T01:50:40.914431233Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 2 01:50:40.914646 env[1763]: time="2024-07-02T01:50:40.914630267Z" level=info msg="Daemon has completed initialization" Jul 2 01:50:40.942207 systemd[1]: Started docker.service. Jul 2 01:50:40.949647 env[1763]: time="2024-07-02T01:50:40.949573373Z" level=info msg="API listen on /run/docker.sock" Jul 2 01:50:42.312685 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jul 2 01:50:45.189550 env[1451]: time="2024-07-02T01:50:45.189274434Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jul 2 01:50:45.563258 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 2 01:50:45.563423 systemd[1]: Stopped kubelet.service. Jul 2 01:50:45.564801 systemd[1]: Starting kubelet.service... Jul 2 01:50:45.973490 systemd[1]: Started kubelet.service.
Jul 2 01:50:46.011642 kubelet[1887]: E0702 01:50:46.011580 1887 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 01:50:46.013901 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 01:50:46.014025 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 01:50:46.487827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2028248041.mount: Deactivated successfully. Jul 2 01:50:48.318872 env[1451]: time="2024-07-02T01:50:48.318820967Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:50:48.325700 env[1451]: time="2024-07-02T01:50:48.325650489Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:50:48.328885 env[1451]: time="2024-07-02T01:50:48.328859073Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:50:48.333537 env[1451]: time="2024-07-02T01:50:48.333512313Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:50:48.334178 env[1451]: time="2024-07-02T01:50:48.334150862Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\""
Jul 2 01:50:48.342963 env[1451]: time="2024-07-02T01:50:48.342926510Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jul 2 01:50:48.610882 update_engine[1440]: I0702 01:50:48.610282 1440 update_attempter.cc:509] Updating boot flags... Jul 2 01:50:50.736742 env[1451]: time="2024-07-02T01:50:50.736669076Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:50:50.743968 env[1451]: time="2024-07-02T01:50:50.743929885Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:50:50.748193 env[1451]: time="2024-07-02T01:50:50.748168501Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:50:50.753054 env[1451]: time="2024-07-02T01:50:50.753029467Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:50:50.753690 env[1451]: time="2024-07-02T01:50:50.753664257Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\"" Jul 2 01:50:50.762282 env[1451]: time="2024-07-02T01:50:50.762245606Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jul 2 01:50:52.360459 env[1451]: time="2024-07-02T01:50:52.360405173Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 01:50:52.366031 env[1451]: time="2024-07-02T01:50:52.365966379Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:50:52.372167 env[1451]: time="2024-07-02T01:50:52.372119136Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:50:52.375615 env[1451]: time="2024-07-02T01:50:52.375577330Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:50:52.376287 env[1451]: time="2024-07-02T01:50:52.376255361Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\"" Jul 2 01:50:52.385346 env[1451]: time="2024-07-02T01:50:52.385305560Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jul 2 01:50:54.009632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1314190065.mount: Deactivated successfully.
Jul 2 01:50:54.792478 env[1451]: time="2024-07-02T01:50:54.792424074Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:50:54.800425 env[1451]: time="2024-07-02T01:50:54.800375220Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:50:54.804404 env[1451]: time="2024-07-02T01:50:54.804366773Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:50:54.808282 env[1451]: time="2024-07-02T01:50:54.808252527Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:50:54.808594 env[1451]: time="2024-07-02T01:50:54.808567444Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\"" Jul 2 01:50:54.817059 env[1451]: time="2024-07-02T01:50:54.817028744Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 01:50:55.415610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1248876138.mount: Deactivated successfully. 
Jul 2 01:50:55.447289 env[1451]: time="2024-07-02T01:50:55.447238016Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:50:55.455316 env[1451]: time="2024-07-02T01:50:55.455280047Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:50:55.459336 env[1451]: time="2024-07-02T01:50:55.459309883Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:50:55.464291 env[1451]: time="2024-07-02T01:50:55.464252868Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:50:55.464886 env[1451]: time="2024-07-02T01:50:55.464858662Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jul 2 01:50:55.472748 env[1451]: time="2024-07-02T01:50:55.472717375Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jul 2 01:50:56.063238 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jul 2 01:50:56.063413 systemd[1]: Stopped kubelet.service. Jul 2 01:50:56.064781 systemd[1]: Starting kubelet.service... Jul 2 01:50:56.174652 systemd[1]: Started kubelet.service. 
Jul 2 01:50:56.215650 kubelet[1967]: E0702 01:50:56.215590 1967 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 01:50:56.217739 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 01:50:56.217881 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 01:50:56.586140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3708450943.mount: Deactivated successfully. Jul 2 01:51:00.703930 env[1451]: time="2024-07-02T01:51:00.703885697Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:00.714362 env[1451]: time="2024-07-02T01:51:00.714330305Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:00.718739 env[1451]: time="2024-07-02T01:51:00.718701938Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:00.722951 env[1451]: time="2024-07-02T01:51:00.722911893Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:00.723710 env[1451]: time="2024-07-02T01:51:00.723684525Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jul 2 01:51:00.732425 
env[1451]: time="2024-07-02T01:51:00.732393872Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Jul 2 01:51:01.378671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2226372398.mount: Deactivated successfully. Jul 2 01:51:02.523417 env[1451]: time="2024-07-02T01:51:02.523367178Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:02.532913 env[1451]: time="2024-07-02T01:51:02.532878593Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:02.538535 env[1451]: time="2024-07-02T01:51:02.538509059Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:02.544013 env[1451]: time="2024-07-02T01:51:02.543975289Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:02.544610 env[1451]: time="2024-07-02T01:51:02.544580835Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\"" Jul 2 01:51:06.313292 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jul 2 01:51:06.313478 systemd[1]: Stopped kubelet.service. Jul 2 01:51:06.314945 systemd[1]: Starting kubelet.service... Jul 2 01:51:06.606491 systemd[1]: Started kubelet.service. 
Jul 2 01:51:06.663974 kubelet[2042]: E0702 01:51:06.663927 2042 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 01:51:06.666098 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 01:51:06.666225 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 01:51:07.706475 systemd[1]: Stopped kubelet.service. Jul 2 01:51:07.708482 systemd[1]: Starting kubelet.service... Jul 2 01:51:07.735952 systemd[1]: Reloading. Jul 2 01:51:07.813235 /usr/lib/systemd/system-generators/torcx-generator[2073]: time="2024-07-02T01:51:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 01:51:07.814412 /usr/lib/systemd/system-generators/torcx-generator[2073]: time="2024-07-02T01:51:07Z" level=info msg="torcx already run" Jul 2 01:51:07.887811 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 01:51:07.887967 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 01:51:07.903511 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 01:51:08.002557 systemd[1]: Started kubelet.service. Jul 2 01:51:08.004209 systemd[1]: Stopping kubelet.service... 
Jul 2 01:51:08.004437 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 01:51:08.004597 systemd[1]: Stopped kubelet.service. Jul 2 01:51:08.005947 systemd[1]: Starting kubelet.service... Jul 2 01:51:08.142396 systemd[1]: Started kubelet.service. Jul 2 01:51:08.195846 kubelet[2140]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 01:51:08.195846 kubelet[2140]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 01:51:08.195846 kubelet[2140]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 2 01:51:08.196188 kubelet[2140]: I0702 01:51:08.195891 2140 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 01:51:09.305084 kubelet[2140]: I0702 01:51:09.305049 2140 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 01:51:09.305084 kubelet[2140]: I0702 01:51:09.305079 2140 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 01:51:09.305574 kubelet[2140]: I0702 01:51:09.305537 2140 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 01:51:09.325603 kubelet[2140]: E0702 01:51:09.325580 2140 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.40:6443: connect: connection refused Jul 2 01:51:09.328238 kubelet[2140]: I0702 01:51:09.328223 2140 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 01:51:09.339198 kubelet[2140]: W0702 01:51:09.339179 2140 machine.go:65] Cannot read vendor id correctly, set empty. Jul 2 01:51:09.339812 kubelet[2140]: I0702 01:51:09.339799 2140 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 01:51:09.340107 kubelet[2140]: I0702 01:51:09.340097 2140 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 01:51:09.340323 kubelet[2140]: I0702 01:51:09.340305 2140 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 01:51:09.340451 kubelet[2140]: I0702 01:51:09.340441 2140 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 01:51:09.340510 kubelet[2140]: I0702 01:51:09.340502 2140 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 01:51:09.340659 kubelet[2140]: I0702 
01:51:09.340648 2140 state_mem.go:36] "Initialized new in-memory state store" Jul 2 01:51:09.342540 kubelet[2140]: I0702 01:51:09.342525 2140 kubelet.go:393] "Attempting to sync node with API server" Jul 2 01:51:09.342630 kubelet[2140]: I0702 01:51:09.342620 2140 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 01:51:09.342708 kubelet[2140]: I0702 01:51:09.342699 2140 kubelet.go:309] "Adding apiserver pod source" Jul 2 01:51:09.342798 kubelet[2140]: I0702 01:51:09.342786 2140 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 01:51:09.344130 kubelet[2140]: W0702 01:51:09.344082 2140 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.5-a-267983ca13&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Jul 2 01:51:09.344262 kubelet[2140]: E0702 01:51:09.344250 2140 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.5-a-267983ca13&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Jul 2 01:51:09.344520 kubelet[2140]: W0702 01:51:09.344478 2140 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Jul 2 01:51:09.344621 kubelet[2140]: E0702 01:51:09.344610 2140 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Jul 2 01:51:09.344779 kubelet[2140]: I0702 01:51:09.344766 2140 
kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 01:51:09.351108 kubelet[2140]: W0702 01:51:09.351091 2140 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 01:51:09.351891 kubelet[2140]: I0702 01:51:09.351876 2140 server.go:1232] "Started kubelet" Jul 2 01:51:09.354692 kubelet[2140]: E0702 01:51:09.354614 2140 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.5-a-267983ca13.17de425bb5370e51", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.5-a-267983ca13", UID:"ci-3510.3.5-a-267983ca13", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.5-a-267983ca13"}, FirstTimestamp:time.Date(2024, time.July, 2, 1, 51, 9, 351849553, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 1, 51, 9, 351849553, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510.3.5-a-267983ca13"}': 'Post "https://10.200.20.40:6443/api/v1/namespaces/default/events": dial tcp 10.200.20.40:6443: connect: connection refused'(may retry after sleeping) Jul 2 01:51:09.354977 kubelet[2140]: E0702 01:51:09.354962 2140 cri_stats_provider.go:448] 
"Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 01:51:09.355090 kubelet[2140]: E0702 01:51:09.355079 2140 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 01:51:09.361720 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Jul 2 01:51:09.361886 kubelet[2140]: I0702 01:51:09.361860 2140 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 01:51:09.363164 kubelet[2140]: I0702 01:51:09.363149 2140 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 01:51:09.364075 kubelet[2140]: I0702 01:51:09.364056 2140 server.go:462] "Adding debug handlers to kubelet server" Jul 2 01:51:09.364747 kubelet[2140]: I0702 01:51:09.364717 2140 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 01:51:09.365683 kubelet[2140]: I0702 01:51:09.365623 2140 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 01:51:09.365857 kubelet[2140]: I0702 01:51:09.365839 2140 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 01:51:09.366628 kubelet[2140]: I0702 01:51:09.366608 2140 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 01:51:09.366783 kubelet[2140]: E0702 01:51:09.366743 2140 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-a-267983ca13?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="200ms" Jul 2 01:51:09.367017 kubelet[2140]: I0702 01:51:09.367000 2140 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 01:51:09.367127 kubelet[2140]: W0702 01:51:09.366628 2140 
reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Jul 2 01:51:09.367949 kubelet[2140]: E0702 01:51:09.367917 2140 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Jul 2 01:51:09.434627 kubelet[2140]: I0702 01:51:09.434583 2140 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 01:51:09.436299 kubelet[2140]: I0702 01:51:09.436121 2140 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 01:51:09.436299 kubelet[2140]: I0702 01:51:09.436141 2140 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 01:51:09.436299 kubelet[2140]: I0702 01:51:09.436157 2140 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 01:51:09.436299 kubelet[2140]: E0702 01:51:09.436197 2140 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 01:51:09.437304 kubelet[2140]: W0702 01:51:09.437139 2140 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Jul 2 01:51:09.437304 kubelet[2140]: E0702 01:51:09.437169 2140 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: 
connection refused Jul 2 01:51:09.437716 kubelet[2140]: I0702 01:51:09.437690 2140 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 01:51:09.437716 kubelet[2140]: I0702 01:51:09.437715 2140 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 01:51:09.437816 kubelet[2140]: I0702 01:51:09.437732 2140 state_mem.go:36] "Initialized new in-memory state store" Jul 2 01:51:09.444888 kubelet[2140]: I0702 01:51:09.444850 2140 policy_none.go:49] "None policy: Start" Jul 2 01:51:09.445489 kubelet[2140]: I0702 01:51:09.445463 2140 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 01:51:09.445489 kubelet[2140]: I0702 01:51:09.445490 2140 state_mem.go:35] "Initializing new in-memory state store" Jul 2 01:51:09.453522 systemd[1]: Created slice kubepods.slice. Jul 2 01:51:09.458302 systemd[1]: Created slice kubepods-burstable.slice. Jul 2 01:51:09.460857 systemd[1]: Created slice kubepods-besteffort.slice. Jul 2 01:51:09.466678 kubelet[2140]: I0702 01:51:09.466660 2140 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.5-a-267983ca13" Jul 2 01:51:09.467496 kubelet[2140]: I0702 01:51:09.467465 2140 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 01:51:09.467723 kubelet[2140]: E0702 01:51:09.467708 2140 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-3510.3.5-a-267983ca13" Jul 2 01:51:09.467819 kubelet[2140]: I0702 01:51:09.467716 2140 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 01:51:09.468658 kubelet[2140]: E0702 01:51:09.468643 2140 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.5-a-267983ca13\" not found" Jul 2 01:51:09.536868 kubelet[2140]: I0702 01:51:09.536843 2140 topology_manager.go:215] "Topology 
Admit Handler" podUID="f85d28cd6ab984b39bec3be85cc2e062" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.5-a-267983ca13" Jul 2 01:51:09.538455 kubelet[2140]: I0702 01:51:09.538437 2140 topology_manager.go:215] "Topology Admit Handler" podUID="d217b14d6d363ca0ff02a92a8da4c8a6" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.5-a-267983ca13" Jul 2 01:51:09.539661 kubelet[2140]: I0702 01:51:09.539631 2140 topology_manager.go:215] "Topology Admit Handler" podUID="06d89e0ebe90815a05dd039fad2e4dd8" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.5-a-267983ca13" Jul 2 01:51:09.544471 systemd[1]: Created slice kubepods-burstable-podf85d28cd6ab984b39bec3be85cc2e062.slice. Jul 2 01:51:09.560366 systemd[1]: Created slice kubepods-burstable-podd217b14d6d363ca0ff02a92a8da4c8a6.slice. Jul 2 01:51:09.566133 kubelet[2140]: I0702 01:51:09.566108 2140 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f85d28cd6ab984b39bec3be85cc2e062-ca-certs\") pod \"kube-apiserver-ci-3510.3.5-a-267983ca13\" (UID: \"f85d28cd6ab984b39bec3be85cc2e062\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-267983ca13" Jul 2 01:51:09.566214 kubelet[2140]: I0702 01:51:09.566188 2140 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f85d28cd6ab984b39bec3be85cc2e062-k8s-certs\") pod \"kube-apiserver-ci-3510.3.5-a-267983ca13\" (UID: \"f85d28cd6ab984b39bec3be85cc2e062\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-267983ca13" Jul 2 01:51:09.566248 kubelet[2140]: I0702 01:51:09.566229 2140 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d217b14d6d363ca0ff02a92a8da4c8a6-ca-certs\") pod \"kube-controller-manager-ci-3510.3.5-a-267983ca13\" (UID: 
\"d217b14d6d363ca0ff02a92a8da4c8a6\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-267983ca13" Jul 2 01:51:09.566274 kubelet[2140]: I0702 01:51:09.566253 2140 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d217b14d6d363ca0ff02a92a8da4c8a6-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.5-a-267983ca13\" (UID: \"d217b14d6d363ca0ff02a92a8da4c8a6\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-267983ca13" Jul 2 01:51:09.566331 kubelet[2140]: I0702 01:51:09.566315 2140 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d217b14d6d363ca0ff02a92a8da4c8a6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.5-a-267983ca13\" (UID: \"d217b14d6d363ca0ff02a92a8da4c8a6\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-267983ca13" Jul 2 01:51:09.566366 kubelet[2140]: I0702 01:51:09.566343 2140 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f85d28cd6ab984b39bec3be85cc2e062-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.5-a-267983ca13\" (UID: \"f85d28cd6ab984b39bec3be85cc2e062\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-267983ca13" Jul 2 01:51:09.566427 kubelet[2140]: I0702 01:51:09.566389 2140 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d217b14d6d363ca0ff02a92a8da4c8a6-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.5-a-267983ca13\" (UID: \"d217b14d6d363ca0ff02a92a8da4c8a6\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-267983ca13" Jul 2 01:51:09.566427 kubelet[2140]: I0702 01:51:09.566409 2140 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d217b14d6d363ca0ff02a92a8da4c8a6-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.5-a-267983ca13\" (UID: \"d217b14d6d363ca0ff02a92a8da4c8a6\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-267983ca13" Jul 2 01:51:09.566477 kubelet[2140]: I0702 01:51:09.566467 2140 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/06d89e0ebe90815a05dd039fad2e4dd8-kubeconfig\") pod \"kube-scheduler-ci-3510.3.5-a-267983ca13\" (UID: \"06d89e0ebe90815a05dd039fad2e4dd8\") " pod="kube-system/kube-scheduler-ci-3510.3.5-a-267983ca13" Jul 2 01:51:09.567389 kubelet[2140]: E0702 01:51:09.567370 2140 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-a-267983ca13?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="400ms" Jul 2 01:51:09.570372 systemd[1]: Created slice kubepods-burstable-pod06d89e0ebe90815a05dd039fad2e4dd8.slice. 
Jul 2 01:51:09.669449 kubelet[2140]: I0702 01:51:09.669419 2140 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.5-a-267983ca13" Jul 2 01:51:09.669814 kubelet[2140]: E0702 01:51:09.669799 2140 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-3510.3.5-a-267983ca13" Jul 2 01:51:09.859399 env[1451]: time="2024-07-02T01:51:09.859097967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.5-a-267983ca13,Uid:f85d28cd6ab984b39bec3be85cc2e062,Namespace:kube-system,Attempt:0,}" Jul 2 01:51:09.863747 env[1451]: time="2024-07-02T01:51:09.863630358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.5-a-267983ca13,Uid:d217b14d6d363ca0ff02a92a8da4c8a6,Namespace:kube-system,Attempt:0,}" Jul 2 01:51:09.872999 env[1451]: time="2024-07-02T01:51:09.872797779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.5-a-267983ca13,Uid:06d89e0ebe90815a05dd039fad2e4dd8,Namespace:kube-system,Attempt:0,}" Jul 2 01:51:09.968521 kubelet[2140]: E0702 01:51:09.968471 2140 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-a-267983ca13?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="800ms" Jul 2 01:51:10.072365 kubelet[2140]: I0702 01:51:10.072332 2140 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.5-a-267983ca13" Jul 2 01:51:10.072683 kubelet[2140]: E0702 01:51:10.072665 2140 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-3510.3.5-a-267983ca13" Jul 2 01:51:10.331210 kubelet[2140]: W0702 01:51:10.331152 2140 
reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.5-a-267983ca13&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Jul 2 01:51:10.331210 kubelet[2140]: E0702 01:51:10.331214 2140 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.5-a-267983ca13&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Jul 2 01:51:10.468985 kubelet[2140]: W0702 01:51:10.468930 2140 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Jul 2 01:51:10.468985 kubelet[2140]: E0702 01:51:10.468989 2140 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Jul 2 01:51:11.069824 kubelet[2140]: E0702 01:51:10.769515 2140 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-a-267983ca13?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="1.6s" Jul 2 01:51:11.069824 kubelet[2140]: W0702 01:51:10.801122 2140 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Jul 2 01:51:11.069824 kubelet[2140]: E0702 01:51:10.801150 2140 reflector.go:147] 
vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Jul 2 01:51:11.069824 kubelet[2140]: I0702 01:51:10.874911 2140 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.5-a-267983ca13" Jul 2 01:51:11.069824 kubelet[2140]: E0702 01:51:10.875172 2140 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-3510.3.5-a-267983ca13" Jul 2 01:51:11.069824 kubelet[2140]: W0702 01:51:10.895877 2140 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Jul 2 01:51:11.069824 kubelet[2140]: E0702 01:51:10.895903 2140 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Jul 2 01:51:11.330355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1719118044.mount: Deactivated successfully. 
Jul 2 01:51:11.354363 env[1451]: time="2024-07-02T01:51:11.354325853Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:11.371384 env[1451]: time="2024-07-02T01:51:11.371332298Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:11.375158 env[1451]: time="2024-07-02T01:51:11.375116948Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:11.383861 env[1451]: time="2024-07-02T01:51:11.383811068Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:11.389768 env[1451]: time="2024-07-02T01:51:11.389717678Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:11.393127 env[1451]: time="2024-07-02T01:51:11.393100096Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:11.396624 env[1451]: time="2024-07-02T01:51:11.396582791Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:11.399767 env[1451]: time="2024-07-02T01:51:11.399712854Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:11.403450 env[1451]: time="2024-07-02T01:51:11.403414745Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:11.406231 env[1451]: time="2024-07-02T01:51:11.406195894Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:11.412444 env[1451]: time="2024-07-02T01:51:11.412407739Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:11.420859 env[1451]: time="2024-07-02T01:51:11.420831783Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 01:51:11.476817 env[1451]: time="2024-07-02T01:51:11.476732789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 01:51:11.477716 env[1451]: time="2024-07-02T01:51:11.476793588Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 01:51:11.477716 env[1451]: time="2024-07-02T01:51:11.476804588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 01:51:11.477716 env[1451]: time="2024-07-02T01:51:11.477005944Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/998674208de35188f397262ffc745058e3a90e45560dec3a9c097d6d643da1e9 pid=2178 runtime=io.containerd.runc.v2 Jul 2 01:51:11.486522 kubelet[2140]: E0702 01:51:11.486482 2140 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.40:6443: connect: connection refused Jul 2 01:51:11.492582 systemd[1]: Started cri-containerd-998674208de35188f397262ffc745058e3a90e45560dec3a9c097d6d643da1e9.scope. Jul 2 01:51:11.516813 env[1451]: time="2024-07-02T01:51:11.515986463Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 01:51:11.516813 env[1451]: time="2024-07-02T01:51:11.516030142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 01:51:11.516813 env[1451]: time="2024-07-02T01:51:11.516040222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 01:51:11.516813 env[1451]: time="2024-07-02T01:51:11.516140860Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/94d57fe54934024f2093022114b3c0539bedd192aa2c932de0b7946a55e61c1c pid=2212 runtime=io.containerd.runc.v2 Jul 2 01:51:11.520239 env[1451]: time="2024-07-02T01:51:11.520153626Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 01:51:11.520239 env[1451]: time="2024-07-02T01:51:11.520204905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 01:51:11.520370 env[1451]: time="2024-07-02T01:51:11.520231305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 01:51:11.520415 env[1451]: time="2024-07-02T01:51:11.520376022Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce9d8f1a894f4267e613cae379e733bb95a2a5785529ff6b603db82afdeb62dc pid=2226 runtime=io.containerd.runc.v2 Jul 2 01:51:11.533487 systemd[1]: Started cri-containerd-94d57fe54934024f2093022114b3c0539bedd192aa2c932de0b7946a55e61c1c.scope. Jul 2 01:51:11.544714 systemd[1]: Started cri-containerd-ce9d8f1a894f4267e613cae379e733bb95a2a5785529ff6b603db82afdeb62dc.scope. 
Jul 2 01:51:11.561427 env[1451]: time="2024-07-02T01:51:11.561394263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.5-a-267983ca13,Uid:f85d28cd6ab984b39bec3be85cc2e062,Namespace:kube-system,Attempt:0,} returns sandbox id \"998674208de35188f397262ffc745058e3a90e45560dec3a9c097d6d643da1e9\"" Jul 2 01:51:11.566653 env[1451]: time="2024-07-02T01:51:11.566610807Z" level=info msg="CreateContainer within sandbox \"998674208de35188f397262ffc745058e3a90e45560dec3a9c097d6d643da1e9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 01:51:11.583534 env[1451]: time="2024-07-02T01:51:11.582722869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.5-a-267983ca13,Uid:d217b14d6d363ca0ff02a92a8da4c8a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"94d57fe54934024f2093022114b3c0539bedd192aa2c932de0b7946a55e61c1c\"" Jul 2 01:51:11.584825 env[1451]: time="2024-07-02T01:51:11.584800671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.5-a-267983ca13,Uid:06d89e0ebe90815a05dd039fad2e4dd8,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce9d8f1a894f4267e613cae379e733bb95a2a5785529ff6b603db82afdeb62dc\"" Jul 2 01:51:11.588879 env[1451]: time="2024-07-02T01:51:11.588839436Z" level=info msg="CreateContainer within sandbox \"94d57fe54934024f2093022114b3c0539bedd192aa2c932de0b7946a55e61c1c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 01:51:11.589391 env[1451]: time="2024-07-02T01:51:11.589366186Z" level=info msg="CreateContainer within sandbox \"ce9d8f1a894f4267e613cae379e733bb95a2a5785529ff6b603db82afdeb62dc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 01:51:11.614851 env[1451]: time="2024-07-02T01:51:11.614793876Z" level=info msg="CreateContainer within sandbox \"998674208de35188f397262ffc745058e3a90e45560dec3a9c097d6d643da1e9\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"18f7839fa5abca274fe0a1840bd208502d8edd3f36ff6af04954a080f8e4516c\"" Jul 2 01:51:11.615906 env[1451]: time="2024-07-02T01:51:11.615871816Z" level=info msg="StartContainer for \"18f7839fa5abca274fe0a1840bd208502d8edd3f36ff6af04954a080f8e4516c\"" Jul 2 01:51:11.631861 systemd[1]: Started cri-containerd-18f7839fa5abca274fe0a1840bd208502d8edd3f36ff6af04954a080f8e4516c.scope. Jul 2 01:51:11.672471 env[1451]: time="2024-07-02T01:51:11.672417450Z" level=info msg="CreateContainer within sandbox \"ce9d8f1a894f4267e613cae379e733bb95a2a5785529ff6b603db82afdeb62dc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"82604f5127395ac2c2aa876a0adc79c953284d2db7abd87f53d54bdc51c2e624\"" Jul 2 01:51:11.673430 env[1451]: time="2024-07-02T01:51:11.673401992Z" level=info msg="StartContainer for \"82604f5127395ac2c2aa876a0adc79c953284d2db7abd87f53d54bdc51c2e624\"" Jul 2 01:51:11.679033 env[1451]: time="2024-07-02T01:51:11.678993409Z" level=info msg="StartContainer for \"18f7839fa5abca274fe0a1840bd208502d8edd3f36ff6af04954a080f8e4516c\" returns successfully" Jul 2 01:51:11.679607 env[1451]: time="2024-07-02T01:51:11.679566358Z" level=info msg="CreateContainer within sandbox \"94d57fe54934024f2093022114b3c0539bedd192aa2c932de0b7946a55e61c1c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e8a2d784c1cf8ea62f4deaa2944c3071ae766b8ac224abdcdebb7e4b9e48ff1f\"" Jul 2 01:51:11.680294 env[1451]: time="2024-07-02T01:51:11.680019190Z" level=info msg="StartContainer for \"e8a2d784c1cf8ea62f4deaa2944c3071ae766b8ac224abdcdebb7e4b9e48ff1f\"" Jul 2 01:51:11.693239 systemd[1]: Started cri-containerd-82604f5127395ac2c2aa876a0adc79c953284d2db7abd87f53d54bdc51c2e624.scope. Jul 2 01:51:11.706858 systemd[1]: Started cri-containerd-e8a2d784c1cf8ea62f4deaa2944c3071ae766b8ac224abdcdebb7e4b9e48ff1f.scope. 
Jul 2 01:51:11.761455 env[1451]: time="2024-07-02T01:51:11.761407324Z" level=info msg="StartContainer for \"e8a2d784c1cf8ea62f4deaa2944c3071ae766b8ac224abdcdebb7e4b9e48ff1f\" returns successfully" Jul 2 01:51:11.769971 env[1451]: time="2024-07-02T01:51:11.769923887Z" level=info msg="StartContainer for \"82604f5127395ac2c2aa876a0adc79c953284d2db7abd87f53d54bdc51c2e624\" returns successfully" Jul 2 01:51:12.477239 kubelet[2140]: I0702 01:51:12.477205 2140 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.5-a-267983ca13" Jul 2 01:51:14.536655 kubelet[2140]: E0702 01:51:14.536602 2140 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.5-a-267983ca13\" not found" node="ci-3510.3.5-a-267983ca13" Jul 2 01:51:14.621690 kubelet[2140]: I0702 01:51:14.621647 2140 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.5-a-267983ca13" Jul 2 01:51:15.349205 kubelet[2140]: I0702 01:51:15.349165 2140 apiserver.go:52] "Watching apiserver" Jul 2 01:51:15.366473 kubelet[2140]: I0702 01:51:15.366423 2140 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 01:51:17.274706 systemd[1]: Reloading. Jul 2 01:51:17.364593 /usr/lib/systemd/system-generators/torcx-generator[2437]: time="2024-07-02T01:51:17Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 01:51:17.364974 /usr/lib/systemd/system-generators/torcx-generator[2437]: time="2024-07-02T01:51:17Z" level=info msg="torcx already run" Jul 2 01:51:17.402870 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Jul 2 01:51:17.402886 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 01:51:17.419234 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 01:51:17.532280 kubelet[2140]: I0702 01:51:17.531826 2140 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 01:51:17.532356 systemd[1]: Stopping kubelet.service... Jul 2 01:51:17.553300 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 01:51:17.553513 systemd[1]: Stopped kubelet.service. Jul 2 01:51:17.553565 systemd[1]: kubelet.service: Consumed 1.531s CPU time. Jul 2 01:51:17.555743 systemd[1]: Starting kubelet.service... Jul 2 01:51:17.637414 systemd[1]: Started kubelet.service. Jul 2 01:51:17.694564 kubelet[2492]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 01:51:17.694564 kubelet[2492]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 01:51:17.694564 kubelet[2492]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 2 01:51:17.694564 kubelet[2492]: I0702 01:51:17.694419 2492 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 01:51:17.700578 kubelet[2492]: I0702 01:51:17.700550 2492 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 01:51:17.700994 kubelet[2492]: I0702 01:51:17.700978 2492 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 01:51:17.701319 kubelet[2492]: I0702 01:51:17.701302 2492 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 01:51:17.703587 kubelet[2492]: I0702 01:51:17.703567 2492 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 01:51:17.704783 kubelet[2492]: I0702 01:51:17.704722 2492 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 01:51:17.713482 kubelet[2492]: W0702 01:51:17.713458 2492 machine.go:65] Cannot read vendor id correctly, set empty. Jul 2 01:51:17.714392 kubelet[2492]: I0702 01:51:17.714374 2492 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 01:51:17.714821 kubelet[2492]: I0702 01:51:17.714806 2492 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 01:51:17.715097 kubelet[2492]: I0702 01:51:17.715078 2492 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 01:51:17.715241 kubelet[2492]: I0702 01:51:17.715229 2492 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 01:51:17.715305 kubelet[2492]: I0702 01:51:17.715296 2492 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 01:51:17.715401 kubelet[2492]: I0702 
01:51:17.715391 2492 state_mem.go:36] "Initialized new in-memory state store" Jul 2 01:51:17.715576 kubelet[2492]: I0702 01:51:17.715566 2492 kubelet.go:393] "Attempting to sync node with API server" Jul 2 01:51:17.721952 kubelet[2492]: I0702 01:51:17.717870 2492 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 01:51:17.721952 kubelet[2492]: I0702 01:51:17.717916 2492 kubelet.go:309] "Adding apiserver pod source" Jul 2 01:51:17.721952 kubelet[2492]: I0702 01:51:17.717931 2492 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 01:51:17.722972 kubelet[2492]: I0702 01:51:17.722946 2492 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 01:51:17.723443 kubelet[2492]: I0702 01:51:17.723417 2492 server.go:1232] "Started kubelet" Jul 2 01:51:17.729766 kubelet[2492]: I0702 01:51:17.724938 2492 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 01:51:17.738732 kubelet[2492]: E0702 01:51:17.738710 2492 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 01:51:17.738902 kubelet[2492]: E0702 01:51:17.738889 2492 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 01:51:17.740295 kubelet[2492]: I0702 01:51:17.740274 2492 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 01:51:17.741611 kubelet[2492]: I0702 01:51:17.741591 2492 server.go:462] "Adding debug handlers to kubelet server" Jul 2 01:51:17.745101 kubelet[2492]: I0702 01:51:17.745082 2492 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 01:51:17.745365 kubelet[2492]: I0702 01:51:17.745351 2492 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 01:51:17.748702 kubelet[2492]: I0702 01:51:17.748679 2492 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 01:51:17.758180 kubelet[2492]: I0702 01:51:17.758002 2492 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 01:51:17.759184 kubelet[2492]: I0702 01:51:17.759167 2492 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 01:51:17.761485 kubelet[2492]: I0702 01:51:17.761468 2492 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 01:51:17.762404 kubelet[2492]: I0702 01:51:17.762388 2492 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 01:51:17.762545 kubelet[2492]: I0702 01:51:17.762534 2492 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 01:51:17.762615 kubelet[2492]: I0702 01:51:17.762606 2492 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 01:51:17.762708 kubelet[2492]: E0702 01:51:17.762699 2492 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 01:51:17.826289 kubelet[2492]: I0702 01:51:17.826257 2492 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 01:51:17.826289 kubelet[2492]: I0702 01:51:17.826282 2492 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 01:51:17.826289 kubelet[2492]: I0702 01:51:17.826299 2492 state_mem.go:36] "Initialized new in-memory state store" Jul 2 01:51:17.826491 kubelet[2492]: I0702 01:51:17.826442 2492 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 01:51:17.826491 kubelet[2492]: I0702 01:51:17.826463 2492 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 01:51:17.826491 kubelet[2492]: I0702 01:51:17.826470 2492 policy_none.go:49] "None policy: Start" Jul 2 01:51:17.827238 kubelet[2492]: I0702 01:51:17.827216 2492 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 01:51:17.827238 kubelet[2492]: I0702 01:51:17.827240 2492 state_mem.go:35] "Initializing new in-memory state store" Jul 2 01:51:17.827374 kubelet[2492]: I0702 01:51:17.827356 2492 state_mem.go:75] "Updated machine memory state" Jul 2 01:51:17.831238 kubelet[2492]: I0702 01:51:17.831210 2492 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 01:51:17.831434 kubelet[2492]: I0702 01:51:17.831411 2492 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 01:51:17.857517 kubelet[2492]: I0702 01:51:17.857493 2492 kubelet_node_status.go:70] "Attempting to register node" 
node="ci-3510.3.5-a-267983ca13" Jul 2 01:51:17.863775 kubelet[2492]: I0702 01:51:17.863719 2492 topology_manager.go:215] "Topology Admit Handler" podUID="f85d28cd6ab984b39bec3be85cc2e062" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.5-a-267983ca13" Jul 2 01:51:17.863913 kubelet[2492]: I0702 01:51:17.863847 2492 topology_manager.go:215] "Topology Admit Handler" podUID="d217b14d6d363ca0ff02a92a8da4c8a6" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.5-a-267983ca13" Jul 2 01:51:17.863913 kubelet[2492]: I0702 01:51:17.863909 2492 topology_manager.go:215] "Topology Admit Handler" podUID="06d89e0ebe90815a05dd039fad2e4dd8" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.5-a-267983ca13" Jul 2 01:51:17.873448 kubelet[2492]: I0702 01:51:17.873412 2492 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.5-a-267983ca13" Jul 2 01:51:17.873619 kubelet[2492]: I0702 01:51:17.873528 2492 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.5-a-267983ca13" Jul 2 01:51:17.873742 kubelet[2492]: W0702 01:51:17.873699 2492 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 01:51:17.877902 kubelet[2492]: W0702 01:51:17.877624 2492 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 01:51:17.878948 kubelet[2492]: W0702 01:51:17.878740 2492 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 01:51:18.053028 sudo[2522]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 2 01:51:18.053525 sudo[2522]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 2 01:51:18.060872 kubelet[2492]: 
I0702 01:51:18.060740 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/06d89e0ebe90815a05dd039fad2e4dd8-kubeconfig\") pod \"kube-scheduler-ci-3510.3.5-a-267983ca13\" (UID: \"06d89e0ebe90815a05dd039fad2e4dd8\") " pod="kube-system/kube-scheduler-ci-3510.3.5-a-267983ca13" Jul 2 01:51:18.060991 kubelet[2492]: I0702 01:51:18.060892 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f85d28cd6ab984b39bec3be85cc2e062-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.5-a-267983ca13\" (UID: \"f85d28cd6ab984b39bec3be85cc2e062\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-267983ca13" Jul 2 01:51:18.060991 kubelet[2492]: I0702 01:51:18.060940 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d217b14d6d363ca0ff02a92a8da4c8a6-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.5-a-267983ca13\" (UID: \"d217b14d6d363ca0ff02a92a8da4c8a6\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-267983ca13" Jul 2 01:51:18.060991 kubelet[2492]: I0702 01:51:18.060962 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d217b14d6d363ca0ff02a92a8da4c8a6-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.5-a-267983ca13\" (UID: \"d217b14d6d363ca0ff02a92a8da4c8a6\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-267983ca13" Jul 2 01:51:18.061079 kubelet[2492]: I0702 01:51:18.060986 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d217b14d6d363ca0ff02a92a8da4c8a6-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-3510.3.5-a-267983ca13\" (UID: \"d217b14d6d363ca0ff02a92a8da4c8a6\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-267983ca13" Jul 2 01:51:18.061079 kubelet[2492]: I0702 01:51:18.061047 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f85d28cd6ab984b39bec3be85cc2e062-ca-certs\") pod \"kube-apiserver-ci-3510.3.5-a-267983ca13\" (UID: \"f85d28cd6ab984b39bec3be85cc2e062\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-267983ca13" Jul 2 01:51:18.061079 kubelet[2492]: I0702 01:51:18.061066 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f85d28cd6ab984b39bec3be85cc2e062-k8s-certs\") pod \"kube-apiserver-ci-3510.3.5-a-267983ca13\" (UID: \"f85d28cd6ab984b39bec3be85cc2e062\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-267983ca13" Jul 2 01:51:18.061149 kubelet[2492]: I0702 01:51:18.061086 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d217b14d6d363ca0ff02a92a8da4c8a6-ca-certs\") pod \"kube-controller-manager-ci-3510.3.5-a-267983ca13\" (UID: \"d217b14d6d363ca0ff02a92a8da4c8a6\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-267983ca13" Jul 2 01:51:18.061149 kubelet[2492]: I0702 01:51:18.061133 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d217b14d6d363ca0ff02a92a8da4c8a6-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.5-a-267983ca13\" (UID: \"d217b14d6d363ca0ff02a92a8da4c8a6\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-267983ca13" Jul 2 01:51:18.533370 sudo[2522]: pam_unix(sudo:session): session closed for user root Jul 2 01:51:18.723124 kubelet[2492]: I0702 01:51:18.723082 2492 
apiserver.go:52] "Watching apiserver" Jul 2 01:51:18.761397 kubelet[2492]: I0702 01:51:18.761362 2492 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 01:51:18.815905 kubelet[2492]: W0702 01:51:18.815884 2492 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 01:51:18.816083 kubelet[2492]: E0702 01:51:18.816070 2492 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.5-a-267983ca13\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.5-a-267983ca13" Jul 2 01:51:18.833384 kubelet[2492]: I0702 01:51:18.833364 2492 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.5-a-267983ca13" podStartSLOduration=1.8333037490000001 podCreationTimestamp="2024-07-02 01:51:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 01:51:18.826366576 +0000 UTC m=+1.180884531" watchObservedRunningTime="2024-07-02 01:51:18.833303749 +0000 UTC m=+1.187821704" Jul 2 01:51:18.843957 kubelet[2492]: I0702 01:51:18.843928 2492 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.5-a-267983ca13" podStartSLOduration=1.843877987 podCreationTimestamp="2024-07-02 01:51:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 01:51:18.834119577 +0000 UTC m=+1.188637532" watchObservedRunningTime="2024-07-02 01:51:18.843877987 +0000 UTC m=+1.198395942" Jul 2 01:51:18.854865 kubelet[2492]: I0702 01:51:18.854829 2492 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.5-a-267983ca13" podStartSLOduration=1.854792139 
podCreationTimestamp="2024-07-02 01:51:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 01:51:18.844771013 +0000 UTC m=+1.199288968" watchObservedRunningTime="2024-07-02 01:51:18.854792139 +0000 UTC m=+1.209310094" Jul 2 01:51:20.497554 sudo[1753]: pam_unix(sudo:session): session closed for user root Jul 2 01:51:20.564663 sshd[1750]: pam_unix(sshd:session): session closed for user core Jul 2 01:51:20.567073 systemd[1]: sshd@4-10.200.20.40:22-10.200.16.10:32856.service: Deactivated successfully. Jul 2 01:51:20.567819 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 01:51:20.567998 systemd[1]: session-7.scope: Consumed 6.723s CPU time. Jul 2 01:51:20.568382 systemd-logind[1438]: Session 7 logged out. Waiting for processes to exit. Jul 2 01:51:20.569103 systemd-logind[1438]: Removed session 7. Jul 2 01:51:32.415938 kubelet[2492]: I0702 01:51:32.415903 2492 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 01:51:32.416339 env[1451]: time="2024-07-02T01:51:32.416241324Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 2 01:51:32.416516 kubelet[2492]: I0702 01:51:32.416433 2492 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 01:51:33.298393 kubelet[2492]: I0702 01:51:33.298345 2492 topology_manager.go:215] "Topology Admit Handler" podUID="b60fd2ba-5df8-4568-b296-2895fa50ec01" podNamespace="kube-system" podName="kube-proxy-fh5rv" Jul 2 01:51:33.303078 systemd[1]: Created slice kubepods-besteffort-podb60fd2ba_5df8_4568_b296_2895fa50ec01.slice. 
Jul 2 01:51:33.318095 kubelet[2492]: I0702 01:51:33.318053 2492 topology_manager.go:215] "Topology Admit Handler" podUID="294852f5-eec5-4860-8090-eb9124dccd1e" podNamespace="kube-system" podName="cilium-zfbh9"
Jul 2 01:51:33.322728 systemd[1]: Created slice kubepods-burstable-pod294852f5_eec5_4860_8090_eb9124dccd1e.slice.
Jul 2 01:51:33.331171 kubelet[2492]: I0702 01:51:33.331138 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/294852f5-eec5-4860-8090-eb9124dccd1e-cni-path\") pod \"cilium-zfbh9\" (UID: \"294852f5-eec5-4860-8090-eb9124dccd1e\") " pod="kube-system/cilium-zfbh9"
Jul 2 01:51:33.331278 kubelet[2492]: I0702 01:51:33.331206 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/294852f5-eec5-4860-8090-eb9124dccd1e-lib-modules\") pod \"cilium-zfbh9\" (UID: \"294852f5-eec5-4860-8090-eb9124dccd1e\") " pod="kube-system/cilium-zfbh9"
Jul 2 01:51:33.331278 kubelet[2492]: I0702 01:51:33.331229 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/294852f5-eec5-4860-8090-eb9124dccd1e-host-proc-sys-kernel\") pod \"cilium-zfbh9\" (UID: \"294852f5-eec5-4860-8090-eb9124dccd1e\") " pod="kube-system/cilium-zfbh9"
Jul 2 01:51:33.331326 kubelet[2492]: I0702 01:51:33.331286 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b60fd2ba-5df8-4568-b296-2895fa50ec01-kube-proxy\") pod \"kube-proxy-fh5rv\" (UID: \"b60fd2ba-5df8-4568-b296-2895fa50ec01\") " pod="kube-system/kube-proxy-fh5rv"
Jul 2 01:51:33.331326 kubelet[2492]: I0702 01:51:33.331305 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b60fd2ba-5df8-4568-b296-2895fa50ec01-xtables-lock\") pod \"kube-proxy-fh5rv\" (UID: \"b60fd2ba-5df8-4568-b296-2895fa50ec01\") " pod="kube-system/kube-proxy-fh5rv"
Jul 2 01:51:33.331378 kubelet[2492]: I0702 01:51:33.331356 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/294852f5-eec5-4860-8090-eb9124dccd1e-cilium-cgroup\") pod \"cilium-zfbh9\" (UID: \"294852f5-eec5-4860-8090-eb9124dccd1e\") " pod="kube-system/cilium-zfbh9"
Jul 2 01:51:33.331378 kubelet[2492]: I0702 01:51:33.331375 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/294852f5-eec5-4860-8090-eb9124dccd1e-hubble-tls\") pod \"cilium-zfbh9\" (UID: \"294852f5-eec5-4860-8090-eb9124dccd1e\") " pod="kube-system/cilium-zfbh9"
Jul 2 01:51:33.331423 kubelet[2492]: I0702 01:51:33.331395 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhtz4\" (UniqueName: \"kubernetes.io/projected/b60fd2ba-5df8-4568-b296-2895fa50ec01-kube-api-access-dhtz4\") pod \"kube-proxy-fh5rv\" (UID: \"b60fd2ba-5df8-4568-b296-2895fa50ec01\") " pod="kube-system/kube-proxy-fh5rv"
Jul 2 01:51:33.331491 kubelet[2492]: I0702 01:51:33.331471 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/294852f5-eec5-4860-8090-eb9124dccd1e-bpf-maps\") pod \"cilium-zfbh9\" (UID: \"294852f5-eec5-4860-8090-eb9124dccd1e\") " pod="kube-system/cilium-zfbh9"
Jul 2 01:51:33.331539 kubelet[2492]: I0702 01:51:33.331497 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/294852f5-eec5-4860-8090-eb9124dccd1e-hostproc\") pod \"cilium-zfbh9\" (UID: \"294852f5-eec5-4860-8090-eb9124dccd1e\") " pod="kube-system/cilium-zfbh9"
Jul 2 01:51:33.331573 kubelet[2492]: I0702 01:51:33.331560 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/294852f5-eec5-4860-8090-eb9124dccd1e-clustermesh-secrets\") pod \"cilium-zfbh9\" (UID: \"294852f5-eec5-4860-8090-eb9124dccd1e\") " pod="kube-system/cilium-zfbh9"
Jul 2 01:51:33.331598 kubelet[2492]: I0702 01:51:33.331583 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr6rt\" (UniqueName: \"kubernetes.io/projected/294852f5-eec5-4860-8090-eb9124dccd1e-kube-api-access-fr6rt\") pod \"cilium-zfbh9\" (UID: \"294852f5-eec5-4860-8090-eb9124dccd1e\") " pod="kube-system/cilium-zfbh9"
Jul 2 01:51:33.331654 kubelet[2492]: I0702 01:51:33.331635 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b60fd2ba-5df8-4568-b296-2895fa50ec01-lib-modules\") pod \"kube-proxy-fh5rv\" (UID: \"b60fd2ba-5df8-4568-b296-2895fa50ec01\") " pod="kube-system/kube-proxy-fh5rv"
Jul 2 01:51:33.331690 kubelet[2492]: I0702 01:51:33.331663 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/294852f5-eec5-4860-8090-eb9124dccd1e-cilium-run\") pod \"cilium-zfbh9\" (UID: \"294852f5-eec5-4860-8090-eb9124dccd1e\") " pod="kube-system/cilium-zfbh9"
Jul 2 01:51:33.331726 kubelet[2492]: I0702 01:51:33.331714 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/294852f5-eec5-4860-8090-eb9124dccd1e-xtables-lock\") pod \"cilium-zfbh9\" (UID: \"294852f5-eec5-4860-8090-eb9124dccd1e\") " pod="kube-system/cilium-zfbh9"
Jul 2 01:51:33.331770 kubelet[2492]: I0702 01:51:33.331737 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/294852f5-eec5-4860-8090-eb9124dccd1e-host-proc-sys-net\") pod \"cilium-zfbh9\" (UID: \"294852f5-eec5-4860-8090-eb9124dccd1e\") " pod="kube-system/cilium-zfbh9"
Jul 2 01:51:33.331817 kubelet[2492]: I0702 01:51:33.331801 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/294852f5-eec5-4860-8090-eb9124dccd1e-etc-cni-netd\") pod \"cilium-zfbh9\" (UID: \"294852f5-eec5-4860-8090-eb9124dccd1e\") " pod="kube-system/cilium-zfbh9"
Jul 2 01:51:33.331849 kubelet[2492]: I0702 01:51:33.331825 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/294852f5-eec5-4860-8090-eb9124dccd1e-cilium-config-path\") pod \"cilium-zfbh9\" (UID: \"294852f5-eec5-4860-8090-eb9124dccd1e\") " pod="kube-system/cilium-zfbh9"
Jul 2 01:51:33.474079 kubelet[2492]: I0702 01:51:33.474051 2492 topology_manager.go:215] "Topology Admit Handler" podUID="52f38591-058f-473e-acfb-794a90308783" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-jqjnv"
Jul 2 01:51:33.479323 systemd[1]: Created slice kubepods-besteffort-pod52f38591_058f_473e_acfb_794a90308783.slice.
Jul 2 01:51:33.533164 kubelet[2492]: I0702 01:51:33.533136 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfw25\" (UniqueName: \"kubernetes.io/projected/52f38591-058f-473e-acfb-794a90308783-kube-api-access-lfw25\") pod \"cilium-operator-6bc8ccdb58-jqjnv\" (UID: \"52f38591-058f-473e-acfb-794a90308783\") " pod="kube-system/cilium-operator-6bc8ccdb58-jqjnv"
Jul 2 01:51:33.533411 kubelet[2492]: I0702 01:51:33.533377 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/52f38591-058f-473e-acfb-794a90308783-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-jqjnv\" (UID: \"52f38591-058f-473e-acfb-794a90308783\") " pod="kube-system/cilium-operator-6bc8ccdb58-jqjnv"
Jul 2 01:51:33.609980 env[1451]: time="2024-07-02T01:51:33.609932050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fh5rv,Uid:b60fd2ba-5df8-4568-b296-2895fa50ec01,Namespace:kube-system,Attempt:0,}"
Jul 2 01:51:33.625208 env[1451]: time="2024-07-02T01:51:33.625170249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zfbh9,Uid:294852f5-eec5-4860-8090-eb9124dccd1e,Namespace:kube-system,Attempt:0,}"
Jul 2 01:51:33.664932 env[1451]: time="2024-07-02T01:51:33.663384246Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 01:51:33.664932 env[1451]: time="2024-07-02T01:51:33.663419325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 01:51:33.664932 env[1451]: time="2024-07-02T01:51:33.663439325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 01:51:33.664932 env[1451]: time="2024-07-02T01:51:33.663611163Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/877f4dec81c430edef0e6631df06ed0461c297dbb5b4feaaa7a0d8827f50bf51 pid=2578 runtime=io.containerd.runc.v2
Jul 2 01:51:33.671993 env[1451]: time="2024-07-02T01:51:33.671917476Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 01:51:33.671993 env[1451]: time="2024-07-02T01:51:33.671961275Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 01:51:33.672153 env[1451]: time="2024-07-02T01:51:33.671971155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 01:51:33.672153 env[1451]: time="2024-07-02T01:51:33.672078434Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c850f411d3766e61085b0441d63c41d1ae59a054e9a9d463dc17bcb7d041a000 pid=2596 runtime=io.containerd.runc.v2
Jul 2 01:51:33.676512 systemd[1]: Started cri-containerd-877f4dec81c430edef0e6631df06ed0461c297dbb5b4feaaa7a0d8827f50bf51.scope.
Jul 2 01:51:33.688253 systemd[1]: Started cri-containerd-c850f411d3766e61085b0441d63c41d1ae59a054e9a9d463dc17bcb7d041a000.scope.
Jul 2 01:51:33.718638 env[1451]: time="2024-07-02T01:51:33.717726512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zfbh9,Uid:294852f5-eec5-4860-8090-eb9124dccd1e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c850f411d3766e61085b0441d63c41d1ae59a054e9a9d463dc17bcb7d041a000\""
Jul 2 01:51:33.721861 env[1451]: time="2024-07-02T01:51:33.720952838Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jul 2 01:51:33.723033 env[1451]: time="2024-07-02T01:51:33.723000456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fh5rv,Uid:b60fd2ba-5df8-4568-b296-2895fa50ec01,Namespace:kube-system,Attempt:0,} returns sandbox id \"877f4dec81c430edef0e6631df06ed0461c297dbb5b4feaaa7a0d8827f50bf51\""
Jul 2 01:51:33.727534 env[1451]: time="2024-07-02T01:51:33.727495529Z" level=info msg="CreateContainer within sandbox \"877f4dec81c430edef0e6631df06ed0461c297dbb5b4feaaa7a0d8827f50bf51\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 2 01:51:33.765862 env[1451]: time="2024-07-02T01:51:33.765299490Z" level=info msg="CreateContainer within sandbox \"877f4dec81c430edef0e6631df06ed0461c297dbb5b4feaaa7a0d8827f50bf51\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c6bb110a93061c3e576b15dd63bf2fac9c603267dfdb9a9451de99c67a93965e\""
Jul 2 01:51:33.767308 env[1451]: time="2024-07-02T01:51:33.766041762Z" level=info msg="StartContainer for \"c6bb110a93061c3e576b15dd63bf2fac9c603267dfdb9a9451de99c67a93965e\""
Jul 2 01:51:33.782290 systemd[1]: Started cri-containerd-c6bb110a93061c3e576b15dd63bf2fac9c603267dfdb9a9451de99c67a93965e.scope.
Jul 2 01:51:33.785195 env[1451]: time="2024-07-02T01:51:33.784779844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-jqjnv,Uid:52f38591-058f-473e-acfb-794a90308783,Namespace:kube-system,Attempt:0,}"
Jul 2 01:51:33.814363 env[1451]: time="2024-07-02T01:51:33.814309172Z" level=info msg="StartContainer for \"c6bb110a93061c3e576b15dd63bf2fac9c603267dfdb9a9451de99c67a93965e\" returns successfully"
Jul 2 01:51:33.834152 kubelet[2492]: I0702 01:51:33.833853 2492 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-fh5rv" podStartSLOduration=0.833819086 podCreationTimestamp="2024-07-02 01:51:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 01:51:33.832894416 +0000 UTC m=+16.187412371" watchObservedRunningTime="2024-07-02 01:51:33.833819086 +0000 UTC m=+16.188337041"
Jul 2 01:51:33.840995 env[1451]: time="2024-07-02T01:51:33.840921811Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 01:51:33.841219 env[1451]: time="2024-07-02T01:51:33.841158888Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 01:51:33.841219 env[1451]: time="2024-07-02T01:51:33.841175088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 01:51:33.841516 env[1451]: time="2024-07-02T01:51:33.841443085Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4054fe9b7210dbdf1feca1e604d711988538f576afe85dbc1428be19a016df9a pid=2698 runtime=io.containerd.runc.v2
Jul 2 01:51:33.855043 systemd[1]: Started cri-containerd-4054fe9b7210dbdf1feca1e604d711988538f576afe85dbc1428be19a016df9a.scope.
Jul 2 01:51:33.889684 env[1451]: time="2024-07-02T01:51:33.889591977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-jqjnv,Uid:52f38591-058f-473e-acfb-794a90308783,Namespace:kube-system,Attempt:0,} returns sandbox id \"4054fe9b7210dbdf1feca1e604d711988538f576afe85dbc1428be19a016df9a\""
Jul 2 01:51:38.275777 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3839118200.mount: Deactivated successfully.
Jul 2 01:51:40.496861 env[1451]: time="2024-07-02T01:51:40.496812672Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 01:51:40.505346 env[1451]: time="2024-07-02T01:51:40.505310755Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 01:51:40.510313 env[1451]: time="2024-07-02T01:51:40.510275751Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 01:51:40.510928 env[1451]: time="2024-07-02T01:51:40.510902025Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Jul 2 01:51:40.515777 env[1451]: time="2024-07-02T01:51:40.515728502Z" level=info msg="CreateContainer within sandbox \"c850f411d3766e61085b0441d63c41d1ae59a054e9a9d463dc17bcb7d041a000\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 2 01:51:40.516663 env[1451]: time="2024-07-02T01:51:40.516630653Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jul 2 01:51:40.558052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2607126998.mount: Deactivated successfully.
Jul 2 01:51:40.578300 env[1451]: time="2024-07-02T01:51:40.578257499Z" level=info msg="CreateContainer within sandbox \"c850f411d3766e61085b0441d63c41d1ae59a054e9a9d463dc17bcb7d041a000\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"630e45c0d5a1a4fb20ceef5446023a4fd02dc7227749752d3c79d10c346988a3\""
Jul 2 01:51:40.578848 env[1451]: time="2024-07-02T01:51:40.578825174Z" level=info msg="StartContainer for \"630e45c0d5a1a4fb20ceef5446023a4fd02dc7227749752d3c79d10c346988a3\""
Jul 2 01:51:40.597521 systemd[1]: Started cri-containerd-630e45c0d5a1a4fb20ceef5446023a4fd02dc7227749752d3c79d10c346988a3.scope.
Jul 2 01:51:40.630921 env[1451]: time="2024-07-02T01:51:40.630872786Z" level=info msg="StartContainer for \"630e45c0d5a1a4fb20ceef5446023a4fd02dc7227749752d3c79d10c346988a3\" returns successfully"
Jul 2 01:51:40.637900 systemd[1]: cri-containerd-630e45c0d5a1a4fb20ceef5446023a4fd02dc7227749752d3c79d10c346988a3.scope: Deactivated successfully.
Jul 2 01:51:41.556446 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-630e45c0d5a1a4fb20ceef5446023a4fd02dc7227749752d3c79d10c346988a3-rootfs.mount: Deactivated successfully.
Jul 2 01:51:42.438276 env[1451]: time="2024-07-02T01:51:42.438230172Z" level=info msg="shim disconnected" id=630e45c0d5a1a4fb20ceef5446023a4fd02dc7227749752d3c79d10c346988a3
Jul 2 01:51:42.438659 env[1451]: time="2024-07-02T01:51:42.438639529Z" level=warning msg="cleaning up after shim disconnected" id=630e45c0d5a1a4fb20ceef5446023a4fd02dc7227749752d3c79d10c346988a3 namespace=k8s.io
Jul 2 01:51:42.438743 env[1451]: time="2024-07-02T01:51:42.438730328Z" level=info msg="cleaning up dead shim"
Jul 2 01:51:42.445447 env[1451]: time="2024-07-02T01:51:42.445414111Z" level=warning msg="cleanup warnings time=\"2024-07-02T01:51:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2900 runtime=io.containerd.runc.v2\n"
Jul 2 01:51:42.865144 env[1451]: time="2024-07-02T01:51:42.865094018Z" level=info msg="CreateContainer within sandbox \"c850f411d3766e61085b0441d63c41d1ae59a054e9a9d463dc17bcb7d041a000\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 2 01:51:42.914870 env[1451]: time="2024-07-02T01:51:42.914814070Z" level=info msg="CreateContainer within sandbox \"c850f411d3766e61085b0441d63c41d1ae59a054e9a9d463dc17bcb7d041a000\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"712117bfa3359872fab53fcfed532925f846f64347923d5f9fdb54650cd607d0\""
Jul 2 01:51:42.915719 env[1451]: time="2024-07-02T01:51:42.915684302Z" level=info msg="StartContainer for \"712117bfa3359872fab53fcfed532925f846f64347923d5f9fdb54650cd607d0\""
Jul 2 01:51:42.939975 systemd[1]: Started cri-containerd-712117bfa3359872fab53fcfed532925f846f64347923d5f9fdb54650cd607d0.scope.
Jul 2 01:51:42.972338 env[1451]: time="2024-07-02T01:51:42.972277135Z" level=info msg="StartContainer for \"712117bfa3359872fab53fcfed532925f846f64347923d5f9fdb54650cd607d0\" returns successfully"
Jul 2 01:51:42.978949 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 01:51:42.979134 systemd[1]: Stopped systemd-sysctl.service.
Jul 2 01:51:42.979287 systemd[1]: Stopping systemd-sysctl.service...
Jul 2 01:51:42.980700 systemd[1]: Starting systemd-sysctl.service...
Jul 2 01:51:42.985707 systemd[1]: cri-containerd-712117bfa3359872fab53fcfed532925f846f64347923d5f9fdb54650cd607d0.scope: Deactivated successfully.
Jul 2 01:51:42.991175 systemd[1]: Finished systemd-sysctl.service.
Jul 2 01:51:43.019860 env[1451]: time="2024-07-02T01:51:43.019813209Z" level=info msg="shim disconnected" id=712117bfa3359872fab53fcfed532925f846f64347923d5f9fdb54650cd607d0
Jul 2 01:51:43.019860 env[1451]: time="2024-07-02T01:51:43.019855249Z" level=warning msg="cleaning up after shim disconnected" id=712117bfa3359872fab53fcfed532925f846f64347923d5f9fdb54650cd607d0 namespace=k8s.io
Jul 2 01:51:43.019860 env[1451]: time="2024-07-02T01:51:43.019864329Z" level=info msg="cleaning up dead shim"
Jul 2 01:51:43.026264 env[1451]: time="2024-07-02T01:51:43.026223075Z" level=warning msg="cleanup warnings time=\"2024-07-02T01:51:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2964 runtime=io.containerd.runc.v2\n"
Jul 2 01:51:43.870170 env[1451]: time="2024-07-02T01:51:43.870129965Z" level=info msg="CreateContainer within sandbox \"c850f411d3766e61085b0441d63c41d1ae59a054e9a9d463dc17bcb7d041a000\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 2 01:51:43.894356 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-712117bfa3359872fab53fcfed532925f846f64347923d5f9fdb54650cd607d0-rootfs.mount: Deactivated successfully.
Jul 2 01:51:43.907677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount400373083.mount: Deactivated successfully.
Jul 2 01:51:43.911545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount83359629.mount: Deactivated successfully.
Jul 2 01:51:43.927928 env[1451]: time="2024-07-02T01:51:43.927887519Z" level=info msg="CreateContainer within sandbox \"c850f411d3766e61085b0441d63c41d1ae59a054e9a9d463dc17bcb7d041a000\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a3bbcc2e60e1f1c2c15575855dd22082b067c78d26679a103b4fec9a6488975c\""
Jul 2 01:51:43.930331 env[1451]: time="2024-07-02T01:51:43.930302499Z" level=info msg="StartContainer for \"a3bbcc2e60e1f1c2c15575855dd22082b067c78d26679a103b4fec9a6488975c\""
Jul 2 01:51:43.945474 systemd[1]: Started cri-containerd-a3bbcc2e60e1f1c2c15575855dd22082b067c78d26679a103b4fec9a6488975c.scope.
Jul 2 01:51:43.986177 systemd[1]: cri-containerd-a3bbcc2e60e1f1c2c15575855dd22082b067c78d26679a103b4fec9a6488975c.scope: Deactivated successfully.
Jul 2 01:51:43.988936 env[1451]: time="2024-07-02T01:51:43.988888325Z" level=info msg="StartContainer for \"a3bbcc2e60e1f1c2c15575855dd22082b067c78d26679a103b4fec9a6488975c\" returns successfully"
Jul 2 01:51:44.231457 env[1451]: time="2024-07-02T01:51:44.231344803Z" level=info msg="shim disconnected" id=a3bbcc2e60e1f1c2c15575855dd22082b067c78d26679a103b4fec9a6488975c
Jul 2 01:51:44.231457 env[1451]: time="2024-07-02T01:51:44.231391043Z" level=warning msg="cleaning up after shim disconnected" id=a3bbcc2e60e1f1c2c15575855dd22082b067c78d26679a103b4fec9a6488975c namespace=k8s.io
Jul 2 01:51:44.231457 env[1451]: time="2024-07-02T01:51:44.231402163Z" level=info msg="cleaning up dead shim"
Jul 2 01:51:44.250799 env[1451]: time="2024-07-02T01:51:44.250744924Z" level=warning msg="cleanup warnings time=\"2024-07-02T01:51:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3024 runtime=io.containerd.runc.v2\n"
Jul 2 01:51:44.421790 env[1451]: time="2024-07-02T01:51:44.421727634Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 01:51:44.428349 env[1451]: time="2024-07-02T01:51:44.428317659Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 01:51:44.432230 env[1451]: time="2024-07-02T01:51:44.432202467Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 01:51:44.432992 env[1451]: time="2024-07-02T01:51:44.432964461Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jul 2 01:51:44.435785 env[1451]: time="2024-07-02T01:51:44.435087803Z" level=info msg="CreateContainer within sandbox \"4054fe9b7210dbdf1feca1e604d711988538f576afe85dbc1428be19a016df9a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 2 01:51:44.474614 env[1451]: time="2024-07-02T01:51:44.474571678Z" level=info msg="CreateContainer within sandbox \"4054fe9b7210dbdf1feca1e604d711988538f576afe85dbc1428be19a016df9a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6e6c4e447a9cce917ebe48fc4380af148adea8b60414d44c6483979572f651db\""
Jul 2 01:51:44.477135 env[1451]: time="2024-07-02T01:51:44.477109817Z" level=info msg="StartContainer for \"6e6c4e447a9cce917ebe48fc4380af148adea8b60414d44c6483979572f651db\""
Jul 2 01:51:44.492354 systemd[1]: Started cri-containerd-6e6c4e447a9cce917ebe48fc4380af148adea8b60414d44c6483979572f651db.scope.
Jul 2 01:51:44.522274 env[1451]: time="2024-07-02T01:51:44.522220725Z" level=info msg="StartContainer for \"6e6c4e447a9cce917ebe48fc4380af148adea8b60414d44c6483979572f651db\" returns successfully"
Jul 2 01:51:44.871673 env[1451]: time="2024-07-02T01:51:44.871623484Z" level=info msg="CreateContainer within sandbox \"c850f411d3766e61085b0441d63c41d1ae59a054e9a9d463dc17bcb7d041a000\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 2 01:51:44.904147 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3445899347.mount: Deactivated successfully.
Jul 2 01:51:44.913720 env[1451]: time="2024-07-02T01:51:44.913678737Z" level=info msg="CreateContainer within sandbox \"c850f411d3766e61085b0441d63c41d1ae59a054e9a9d463dc17bcb7d041a000\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1f668b2b752e8f2a4c5416b0a8c1833870a4b3b046c79cffffa27343e3e9d086\""
Jul 2 01:51:44.914539 env[1451]: time="2024-07-02T01:51:44.914511610Z" level=info msg="StartContainer for \"1f668b2b752e8f2a4c5416b0a8c1833870a4b3b046c79cffffa27343e3e9d086\""
Jul 2 01:51:44.941672 systemd[1]: Started cri-containerd-1f668b2b752e8f2a4c5416b0a8c1833870a4b3b046c79cffffa27343e3e9d086.scope.
Jul 2 01:51:44.994733 systemd[1]: cri-containerd-1f668b2b752e8f2a4c5416b0a8c1833870a4b3b046c79cffffa27343e3e9d086.scope: Deactivated successfully.
Jul 2 01:51:44.997467 env[1451]: time="2024-07-02T01:51:44.997425166Z" level=info msg="StartContainer for \"1f668b2b752e8f2a4c5416b0a8c1833870a4b3b046c79cffffa27343e3e9d086\" returns successfully"
Jul 2 01:51:45.123270 env[1451]: time="2024-07-02T01:51:45.123161791Z" level=info msg="shim disconnected" id=1f668b2b752e8f2a4c5416b0a8c1833870a4b3b046c79cffffa27343e3e9d086
Jul 2 01:51:45.123523 env[1451]: time="2024-07-02T01:51:45.123502588Z" level=warning msg="cleaning up after shim disconnected" id=1f668b2b752e8f2a4c5416b0a8c1833870a4b3b046c79cffffa27343e3e9d086 namespace=k8s.io
Jul 2 01:51:45.123610 env[1451]: time="2024-07-02T01:51:45.123595827Z" level=info msg="cleaning up dead shim"
Jul 2 01:51:45.131879 env[1451]: time="2024-07-02T01:51:45.131847121Z" level=warning msg="cleanup warnings time=\"2024-07-02T01:51:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3118 runtime=io.containerd.runc.v2\n"
Jul 2 01:51:45.880085 env[1451]: time="2024-07-02T01:51:45.880047200Z" level=info msg="CreateContainer within sandbox \"c850f411d3766e61085b0441d63c41d1ae59a054e9a9d463dc17bcb7d041a000\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 2 01:51:45.894845 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f668b2b752e8f2a4c5416b0a8c1833870a4b3b046c79cffffa27343e3e9d086-rootfs.mount: Deactivated successfully.
Jul 2 01:51:45.896426 kubelet[2492]: I0702 01:51:45.896378 2492 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-jqjnv" podStartSLOduration=2.354740966 podCreationTimestamp="2024-07-02 01:51:33 +0000 UTC" firstStartedPulling="2024-07-02 01:51:33.891787354 +0000 UTC m=+16.246305309" lastFinishedPulling="2024-07-02 01:51:44.433386657 +0000 UTC m=+26.787904612" observedRunningTime="2024-07-02 01:51:44.917947902 +0000 UTC m=+27.272465857" watchObservedRunningTime="2024-07-02 01:51:45.896340269 +0000 UTC m=+28.250858224"
Jul 2 01:51:45.923648 env[1451]: time="2024-07-02T01:51:45.923600129Z" level=info msg="CreateContainer within sandbox \"c850f411d3766e61085b0441d63c41d1ae59a054e9a9d463dc17bcb7d041a000\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6c0b2684bd49cac78ffceb5499455b975f57a65aee26bd1c93028ff69f9357a5\""
Jul 2 01:51:45.925549 env[1451]: time="2024-07-02T01:51:45.925523793Z" level=info msg="StartContainer for \"6c0b2684bd49cac78ffceb5499455b975f57a65aee26bd1c93028ff69f9357a5\""
Jul 2 01:51:45.953907 systemd[1]: Started cri-containerd-6c0b2684bd49cac78ffceb5499455b975f57a65aee26bd1c93028ff69f9357a5.scope.
Jul 2 01:51:45.990560 env[1451]: time="2024-07-02T01:51:45.990513948Z" level=info msg="StartContainer for \"6c0b2684bd49cac78ffceb5499455b975f57a65aee26bd1c93028ff69f9357a5\" returns successfully"
Jul 2 01:51:46.106783 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Jul 2 01:51:46.127219 kubelet[2492]: I0702 01:51:46.127050 2492 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Jul 2 01:51:46.149966 kubelet[2492]: I0702 01:51:46.149858 2492 topology_manager.go:215] "Topology Admit Handler" podUID="a46b2074-cd68-46fd-bff2-9575011c9f2f" podNamespace="kube-system" podName="coredns-5dd5756b68-l5bkh"
Jul 2 01:51:46.155229 systemd[1]: Created slice kubepods-burstable-poda46b2074_cd68_46fd_bff2_9575011c9f2f.slice.
Jul 2 01:51:46.160416 kubelet[2492]: I0702 01:51:46.160386 2492 topology_manager.go:215] "Topology Admit Handler" podUID="939c5953-e664-42d4-a9f9-e8cfef9a0247" podNamespace="kube-system" podName="coredns-5dd5756b68-7rxxj"
Jul 2 01:51:46.165273 systemd[1]: Created slice kubepods-burstable-pod939c5953_e664_42d4_a9f9_e8cfef9a0247.slice.
Jul 2 01:51:46.212120 kubelet[2492]: I0702 01:51:46.212082 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/939c5953-e664-42d4-a9f9-e8cfef9a0247-config-volume\") pod \"coredns-5dd5756b68-7rxxj\" (UID: \"939c5953-e664-42d4-a9f9-e8cfef9a0247\") " pod="kube-system/coredns-5dd5756b68-7rxxj"
Jul 2 01:51:46.212120 kubelet[2492]: I0702 01:51:46.212131 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a46b2074-cd68-46fd-bff2-9575011c9f2f-config-volume\") pod \"coredns-5dd5756b68-l5bkh\" (UID: \"a46b2074-cd68-46fd-bff2-9575011c9f2f\") " pod="kube-system/coredns-5dd5756b68-l5bkh"
Jul 2 01:51:46.212310 kubelet[2492]: I0702 01:51:46.212170 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmv25\" (UniqueName: \"kubernetes.io/projected/939c5953-e664-42d4-a9f9-e8cfef9a0247-kube-api-access-nmv25\") pod \"coredns-5dd5756b68-7rxxj\" (UID: \"939c5953-e664-42d4-a9f9-e8cfef9a0247\") " pod="kube-system/coredns-5dd5756b68-7rxxj"
Jul 2 01:51:46.212310 kubelet[2492]: I0702 01:51:46.212192 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx5mr\" (UniqueName: \"kubernetes.io/projected/a46b2074-cd68-46fd-bff2-9575011c9f2f-kube-api-access-gx5mr\") pod \"coredns-5dd5756b68-l5bkh\" (UID: \"a46b2074-cd68-46fd-bff2-9575011c9f2f\") " pod="kube-system/coredns-5dd5756b68-l5bkh"
Jul 2 01:51:46.459243 env[1451]: time="2024-07-02T01:51:46.459142402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-l5bkh,Uid:a46b2074-cd68-46fd-bff2-9575011c9f2f,Namespace:kube-system,Attempt:0,}"
Jul 2 01:51:46.468562 env[1451]: time="2024-07-02T01:51:46.468489768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-7rxxj,Uid:939c5953-e664-42d4-a9f9-e8cfef9a0247,Namespace:kube-system,Attempt:0,}"
Jul 2 01:51:46.676788 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Jul 2 01:51:48.324127 systemd-networkd[1604]: cilium_host: Link UP
Jul 2 01:51:48.324221 systemd-networkd[1604]: cilium_net: Link UP
Jul 2 01:51:48.335003 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Jul 2 01:51:48.335072 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Jul 2 01:51:48.330043 systemd-networkd[1604]: cilium_net: Gained carrier
Jul 2 01:51:48.335481 systemd-networkd[1604]: cilium_host: Gained carrier
Jul 2 01:51:48.492631 systemd-networkd[1604]: cilium_vxlan: Link UP
Jul 2 01:51:48.492638 systemd-networkd[1604]: cilium_vxlan: Gained carrier
Jul 2 01:51:48.798800 kernel: NET: Registered PF_ALG protocol family
Jul 2 01:51:49.072902 systemd-networkd[1604]: cilium_net: Gained IPv6LL
Jul 2 01:51:49.073160 systemd-networkd[1604]: cilium_host: Gained IPv6LL
Jul 2 01:51:49.588553 systemd-networkd[1604]: lxc_health: Link UP
Jul 2 01:51:49.606493 systemd-networkd[1604]: lxc_health: Gained carrier
Jul 2 01:51:49.607041 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Jul 2 01:51:49.646827 kubelet[2492]: I0702 01:51:49.646515 2492 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-zfbh9" podStartSLOduration=9.855100818 podCreationTimestamp="2024-07-02 01:51:33 +0000 UTC" firstStartedPulling="2024-07-02 01:51:33.719976208 +0000 UTC m=+16.074494163" lastFinishedPulling="2024-07-02 01:51:40.511351501 +0000 UTC m=+22.865869456" observedRunningTime="2024-07-02 01:51:46.902573856 +0000 UTC m=+29.257091851" watchObservedRunningTime="2024-07-02 01:51:49.646476111 +0000 UTC m=+32.000994066"
Jul 2 01:51:50.038991 systemd-networkd[1604]: lxccf1d38840bd1: Link UP
Jul 2 01:51:50.046798 kernel: eth0: renamed from tmpc337b
Jul 2 01:51:50.056829 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxccf1d38840bd1: link becomes ready
Jul 2 01:51:50.057880 systemd-networkd[1604]: lxccf1d38840bd1: Gained carrier
Jul 2 01:51:50.069608 systemd-networkd[1604]: lxceef124675b2d: Link UP
Jul 2 01:51:50.081786 kernel: eth0: renamed from tmpe9492
Jul 2 01:51:50.092786 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxceef124675b2d: link becomes ready
Jul 2 01:51:50.093513 systemd-networkd[1604]: lxceef124675b2d: Gained carrier
Jul 2 01:51:50.095844 systemd-networkd[1604]: cilium_vxlan: Gained IPv6LL
Jul 2 01:51:51.183979 systemd-networkd[1604]: lxc_health: Gained IPv6LL
Jul 2 01:51:52.015881 systemd-networkd[1604]: lxceef124675b2d: Gained IPv6LL
Jul 2 01:51:52.080865 systemd-networkd[1604]: lxccf1d38840bd1: Gained IPv6LL
Jul 2 01:51:53.717161 env[1451]: time="2024-07-02T01:51:53.717065803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 01:51:53.717479 env[1451]: time="2024-07-02T01:51:53.717171922Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 01:51:53.717479 env[1451]: time="2024-07-02T01:51:53.717200002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 01:51:53.718832 env[1451]: time="2024-07-02T01:51:53.717955717Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e949239fe9e5fd6e456394feb62f90c989a4f97d6a50c25783d73108a5f30abc pid=3669 runtime=io.containerd.runc.v2
Jul 2 01:51:53.725978 env[1451]: time="2024-07-02T01:51:53.725747343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 01:51:53.726170 env[1451]: time="2024-07-02T01:51:53.726145820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 01:51:53.726265 env[1451]: time="2024-07-02T01:51:53.726244860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 01:51:53.726508 env[1451]: time="2024-07-02T01:51:53.726467738Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c337b1813dffe936db5e443d194645503248fd60e40c0fc78fdefb52644b7beb pid=3683 runtime=io.containerd.runc.v2
Jul 2 01:51:53.757167 systemd[1]: run-containerd-runc-k8s.io-e949239fe9e5fd6e456394feb62f90c989a4f97d6a50c25783d73108a5f30abc-runc.yXXzlr.mount: Deactivated successfully.
Jul 2 01:51:53.762020 systemd[1]: Started cri-containerd-c337b1813dffe936db5e443d194645503248fd60e40c0fc78fdefb52644b7beb.scope.
Jul 2 01:51:53.766072 systemd[1]: Started cri-containerd-e949239fe9e5fd6e456394feb62f90c989a4f97d6a50c25783d73108a5f30abc.scope.
Jul 2 01:51:53.826887 env[1451]: time="2024-07-02T01:51:53.825934335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-l5bkh,Uid:a46b2074-cd68-46fd-bff2-9575011c9f2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e949239fe9e5fd6e456394feb62f90c989a4f97d6a50c25783d73108a5f30abc\""
Jul 2 01:51:53.827229 env[1451]: time="2024-07-02T01:51:53.827192726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-7rxxj,Uid:939c5953-e664-42d4-a9f9-e8cfef9a0247,Namespace:kube-system,Attempt:0,} returns sandbox id \"c337b1813dffe936db5e443d194645503248fd60e40c0fc78fdefb52644b7beb\""
Jul 2 01:51:53.830401 env[1451]: time="2024-07-02T01:51:53.830363904Z" level=info msg="CreateContainer within sandbox \"e949239fe9e5fd6e456394feb62f90c989a4f97d6a50c25783d73108a5f30abc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 01:51:53.831636 env[1451]: time="2024-07-02T01:51:53.831596816Z" level=info msg="CreateContainer within sandbox \"c337b1813dffe936db5e443d194645503248fd60e40c0fc78fdefb52644b7beb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 01:51:53.876089 env[1451]: time="2024-07-02T01:51:53.876039471Z" level=info msg="CreateContainer within sandbox \"e949239fe9e5fd6e456394feb62f90c989a4f97d6a50c25783d73108a5f30abc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"09ec47021f657277979ce7bcf4ede18ba3e6992621277f1ca0b45835deb34637\""
Jul 2 01:51:53.876994 env[1451]: time="2024-07-02T01:51:53.876955464Z" level=info msg="StartContainer for \"09ec47021f657277979ce7bcf4ede18ba3e6992621277f1ca0b45835deb34637\""
Jul 2 01:51:53.893669 env[1451]: time="2024-07-02T01:51:53.893630030Z" level=info msg="CreateContainer within sandbox \"c337b1813dffe936db5e443d194645503248fd60e40c0fc78fdefb52644b7beb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f433734cc5d7353229ed098dd733d26b6ffbab2dd5651e7ec08e6d5cbb6a1305\""
Jul 2 01:51:53.899793 env[1451]: time="2024-07-02T01:51:53.895988774Z" level=info msg="StartContainer for \"f433734cc5d7353229ed098dd733d26b6ffbab2dd5651e7ec08e6d5cbb6a1305\""
Jul 2 01:51:53.914150 systemd[1]: Started cri-containerd-09ec47021f657277979ce7bcf4ede18ba3e6992621277f1ca0b45835deb34637.scope.
Jul 2 01:51:53.931043 systemd[1]: Started cri-containerd-f433734cc5d7353229ed098dd733d26b6ffbab2dd5651e7ec08e6d5cbb6a1305.scope.
Jul 2 01:51:53.966169 env[1451]: time="2024-07-02T01:51:53.966116652Z" level=info msg="StartContainer for \"09ec47021f657277979ce7bcf4ede18ba3e6992621277f1ca0b45835deb34637\" returns successfully"
Jul 2 01:51:53.986117 env[1451]: time="2024-07-02T01:51:53.986013155Z" level=info msg="StartContainer for \"f433734cc5d7353229ed098dd733d26b6ffbab2dd5651e7ec08e6d5cbb6a1305\" returns successfully"
Jul 2 01:51:54.919041 kubelet[2492]: I0702 01:51:54.919004 2492 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-7rxxj" podStartSLOduration=21.918971385 podCreationTimestamp="2024-07-02 01:51:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 01:51:54.917643994 +0000 UTC m=+37.272161949" watchObservedRunningTime="2024-07-02 01:51:54.918971385 +0000 UTC m=+37.273489340"
Jul 2 01:51:54.929722 kubelet[2492]: I0702 01:51:54.929685 2492 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-l5bkh" podStartSLOduration=21.929652193 podCreationTimestamp="2024-07-02 01:51:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 01:51:54.928362162 +0000 UTC m=+37.282880077" watchObservedRunningTime="2024-07-02 01:51:54.929652193 +0000 UTC m=+37.284170148"
Jul 2 01:53:43.765427 systemd[1]: Started sshd@5-10.200.20.40:22-10.200.16.10:35518.service.
Jul 2 01:53:44.189502 sshd[3836]: Accepted publickey for core from 10.200.16.10 port 35518 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o
Jul 2 01:53:44.191128 sshd[3836]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 01:53:44.195221 systemd-logind[1438]: New session 8 of user core.
Jul 2 01:53:44.195930 systemd[1]: Started session-8.scope.
Jul 2 01:53:44.673215 sshd[3836]: pam_unix(sshd:session): session closed for user core
Jul 2 01:53:44.675559 systemd[1]: sshd@5-10.200.20.40:22-10.200.16.10:35518.service: Deactivated successfully.
Jul 2 01:53:44.676306 systemd[1]: session-8.scope: Deactivated successfully.
Jul 2 01:53:44.676845 systemd-logind[1438]: Session 8 logged out. Waiting for processes to exit.
Jul 2 01:53:44.677510 systemd-logind[1438]: Removed session 8.
Jul 2 01:53:49.759932 systemd[1]: Started sshd@6-10.200.20.40:22-10.200.16.10:49814.service.
Jul 2 01:53:50.226365 sshd[3849]: Accepted publickey for core from 10.200.16.10 port 49814 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o
Jul 2 01:53:50.227954 sshd[3849]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 01:53:50.232191 systemd[1]: Started session-9.scope.
Jul 2 01:53:50.232471 systemd-logind[1438]: New session 9 of user core.
Jul 2 01:53:50.630508 sshd[3849]: pam_unix(sshd:session): session closed for user core
Jul 2 01:53:50.632873 systemd[1]: sshd@6-10.200.20.40:22-10.200.16.10:49814.service: Deactivated successfully.
Jul 2 01:53:50.633633 systemd[1]: session-9.scope: Deactivated successfully.
Jul 2 01:53:50.634197 systemd-logind[1438]: Session 9 logged out. Waiting for processes to exit.
Jul 2 01:53:50.634888 systemd-logind[1438]: Removed session 9.
Jul 2 01:53:55.707577 systemd[1]: Started sshd@7-10.200.20.40:22-10.200.16.10:49830.service.
Jul 2 01:53:56.168141 sshd[3862]: Accepted publickey for core from 10.200.16.10 port 49830 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o
Jul 2 01:53:56.169706 sshd[3862]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 01:53:56.174069 systemd[1]: Started session-10.scope.
Jul 2 01:53:56.174366 systemd-logind[1438]: New session 10 of user core.
Jul 2 01:53:56.570188 sshd[3862]: pam_unix(sshd:session): session closed for user core
Jul 2 01:53:56.572637 systemd[1]: sshd@7-10.200.20.40:22-10.200.16.10:49830.service: Deactivated successfully.
Jul 2 01:53:56.573407 systemd[1]: session-10.scope: Deactivated successfully.
Jul 2 01:53:56.573955 systemd-logind[1438]: Session 10 logged out. Waiting for processes to exit.
Jul 2 01:53:56.574613 systemd-logind[1438]: Removed session 10.
Jul 2 01:54:01.642391 systemd[1]: Started sshd@8-10.200.20.40:22-10.200.16.10:55424.service.
Jul 2 01:54:02.068995 sshd[3874]: Accepted publickey for core from 10.200.16.10 port 55424 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o
Jul 2 01:54:02.070512 sshd[3874]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 01:54:02.074688 systemd[1]: Started session-11.scope.
Jul 2 01:54:02.075823 systemd-logind[1438]: New session 11 of user core.
Jul 2 01:54:02.459479 sshd[3874]: pam_unix(sshd:session): session closed for user core
Jul 2 01:54:02.462952 systemd[1]: sshd@8-10.200.20.40:22-10.200.16.10:55424.service: Deactivated successfully.
Jul 2 01:54:02.463677 systemd[1]: session-11.scope: Deactivated successfully.
Jul 2 01:54:02.464720 systemd-logind[1438]: Session 11 logged out. Waiting for processes to exit.
Jul 2 01:54:02.465613 systemd-logind[1438]: Removed session 11.
Jul 2 01:54:02.536935 systemd[1]: Started sshd@9-10.200.20.40:22-10.200.16.10:55426.service.
Jul 2 01:54:02.999048 sshd[3886]: Accepted publickey for core from 10.200.16.10 port 55426 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o
Jul 2 01:54:03.000587 sshd[3886]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 01:54:03.004717 systemd[1]: Started session-12.scope.
Jul 2 01:54:03.005077 systemd-logind[1438]: New session 12 of user core.
Jul 2 01:54:04.011492 sshd[3886]: pam_unix(sshd:session): session closed for user core
Jul 2 01:54:04.014956 systemd[1]: sshd@9-10.200.20.40:22-10.200.16.10:55426.service: Deactivated successfully.
Jul 2 01:54:04.015680 systemd[1]: session-12.scope: Deactivated successfully.
Jul 2 01:54:04.016622 systemd-logind[1438]: Session 12 logged out. Waiting for processes to exit.
Jul 2 01:54:04.017439 systemd-logind[1438]: Removed session 12.
Jul 2 01:54:04.089080 systemd[1]: Started sshd@10-10.200.20.40:22-10.200.16.10:55438.service.
Jul 2 01:54:04.514990 sshd[3898]: Accepted publickey for core from 10.200.16.10 port 55438 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o
Jul 2 01:54:04.516230 sshd[3898]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 01:54:04.520362 systemd[1]: Started session-13.scope.
Jul 2 01:54:04.521601 systemd-logind[1438]: New session 13 of user core.
Jul 2 01:54:04.901988 sshd[3898]: pam_unix(sshd:session): session closed for user core
Jul 2 01:54:04.904385 systemd-logind[1438]: Session 13 logged out. Waiting for processes to exit.
Jul 2 01:54:04.905443 systemd[1]: sshd@10-10.200.20.40:22-10.200.16.10:55438.service: Deactivated successfully.
Jul 2 01:54:04.906150 systemd[1]: session-13.scope: Deactivated successfully.
Jul 2 01:54:04.907194 systemd-logind[1438]: Removed session 13.
Jul 2 01:54:09.977638 systemd[1]: Started sshd@11-10.200.20.40:22-10.200.16.10:59206.service.
Jul 2 01:54:10.403235 sshd[3909]: Accepted publickey for core from 10.200.16.10 port 59206 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o
Jul 2 01:54:10.404855 sshd[3909]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 01:54:10.409235 systemd[1]: Started session-14.scope.
Jul 2 01:54:10.409529 systemd-logind[1438]: New session 14 of user core.
Jul 2 01:54:10.786218 sshd[3909]: pam_unix(sshd:session): session closed for user core
Jul 2 01:54:10.788616 systemd[1]: sshd@11-10.200.20.40:22-10.200.16.10:59206.service: Deactivated successfully.
Jul 2 01:54:10.789359 systemd[1]: session-14.scope: Deactivated successfully.
Jul 2 01:54:10.789909 systemd-logind[1438]: Session 14 logged out. Waiting for processes to exit.
Jul 2 01:54:10.790556 systemd-logind[1438]: Removed session 14.
Jul 2 01:54:15.865786 systemd[1]: Started sshd@12-10.200.20.40:22-10.200.16.10:59208.service.
Jul 2 01:54:16.331812 sshd[3922]: Accepted publickey for core from 10.200.16.10 port 59208 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o
Jul 2 01:54:16.333415 sshd[3922]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 01:54:16.337864 systemd[1]: Started session-15.scope.
Jul 2 01:54:16.338182 systemd-logind[1438]: New session 15 of user core.
Jul 2 01:54:16.733164 sshd[3922]: pam_unix(sshd:session): session closed for user core
Jul 2 01:54:16.735905 systemd[1]: sshd@12-10.200.20.40:22-10.200.16.10:59208.service: Deactivated successfully.
Jul 2 01:54:16.736625 systemd[1]: session-15.scope: Deactivated successfully.
Jul 2 01:54:16.737316 systemd-logind[1438]: Session 15 logged out. Waiting for processes to exit.
Jul 2 01:54:16.738062 systemd-logind[1438]: Removed session 15.
Jul 2 01:54:16.812115 systemd[1]: Started sshd@13-10.200.20.40:22-10.200.16.10:59216.service.
Jul 2 01:54:17.278951 sshd[3933]: Accepted publickey for core from 10.200.16.10 port 59216 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o
Jul 2 01:54:17.280520 sshd[3933]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 01:54:17.284708 systemd[1]: Started session-16.scope.
Jul 2 01:54:17.285242 systemd-logind[1438]: New session 16 of user core.
Jul 2 01:54:17.737405 sshd[3933]: pam_unix(sshd:session): session closed for user core
Jul 2 01:54:17.740128 systemd[1]: sshd@13-10.200.20.40:22-10.200.16.10:59216.service: Deactivated successfully.
Jul 2 01:54:17.740862 systemd[1]: session-16.scope: Deactivated successfully.
Jul 2 01:54:17.740924 systemd-logind[1438]: Session 16 logged out. Waiting for processes to exit.
Jul 2 01:54:17.742138 systemd-logind[1438]: Removed session 16.
Jul 2 01:54:17.807970 systemd[1]: Started sshd@14-10.200.20.40:22-10.200.16.10:59226.service.
Jul 2 01:54:18.235043 sshd[3944]: Accepted publickey for core from 10.200.16.10 port 59226 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o
Jul 2 01:54:18.236312 sshd[3944]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 01:54:18.240514 systemd[1]: Started session-17.scope.
Jul 2 01:54:18.240876 systemd-logind[1438]: New session 17 of user core.
Jul 2 01:54:19.308634 sshd[3944]: pam_unix(sshd:session): session closed for user core
Jul 2 01:54:19.311312 systemd[1]: sshd@14-10.200.20.40:22-10.200.16.10:59226.service: Deactivated successfully.
Jul 2 01:54:19.312092 systemd[1]: session-17.scope: Deactivated successfully.
Jul 2 01:54:19.313042 systemd-logind[1438]: Session 17 logged out. Waiting for processes to exit.
Jul 2 01:54:19.313810 systemd-logind[1438]: Removed session 17.
Jul 2 01:54:19.379563 systemd[1]: Started sshd@15-10.200.20.40:22-10.200.16.10:35430.service.
Jul 2 01:54:19.805908 sshd[3961]: Accepted publickey for core from 10.200.16.10 port 35430 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o
Jul 2 01:54:19.807452 sshd[3961]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 01:54:19.810760 systemd-logind[1438]: New session 18 of user core.
Jul 2 01:54:19.813514 systemd[1]: Started session-18.scope.
Jul 2 01:54:20.333370 sshd[3961]: pam_unix(sshd:session): session closed for user core
Jul 2 01:54:20.336238 systemd[1]: sshd@15-10.200.20.40:22-10.200.16.10:35430.service: Deactivated successfully.
Jul 2 01:54:20.337504 systemd[1]: session-18.scope: Deactivated successfully.
Jul 2 01:54:20.338450 systemd-logind[1438]: Session 18 logged out. Waiting for processes to exit.
Jul 2 01:54:20.339450 systemd-logind[1438]: Removed session 18.
Jul 2 01:54:20.416865 systemd[1]: Started sshd@16-10.200.20.40:22-10.200.16.10:35440.service.
Jul 2 01:54:20.882748 sshd[3971]: Accepted publickey for core from 10.200.16.10 port 35440 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o
Jul 2 01:54:20.883974 sshd[3971]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 01:54:20.888359 systemd[1]: Started session-19.scope.
Jul 2 01:54:20.888639 systemd-logind[1438]: New session 19 of user core.
Jul 2 01:54:21.296113 sshd[3971]: pam_unix(sshd:session): session closed for user core
Jul 2 01:54:21.298871 systemd[1]: sshd@16-10.200.20.40:22-10.200.16.10:35440.service: Deactivated successfully.
Jul 2 01:54:21.299574 systemd[1]: session-19.scope: Deactivated successfully.
Jul 2 01:54:21.300009 systemd-logind[1438]: Session 19 logged out. Waiting for processes to exit.
Jul 2 01:54:21.300643 systemd-logind[1438]: Removed session 19.
Jul 2 01:54:26.368257 systemd[1]: Started sshd@17-10.200.20.40:22-10.200.16.10:35456.service.
Jul 2 01:54:26.800402 sshd[3985]: Accepted publickey for core from 10.200.16.10 port 35456 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o
Jul 2 01:54:26.802000 sshd[3985]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 01:54:26.806194 systemd[1]: Started session-20.scope.
Jul 2 01:54:26.806778 systemd-logind[1438]: New session 20 of user core.
Jul 2 01:54:27.188044 sshd[3985]: pam_unix(sshd:session): session closed for user core
Jul 2 01:54:27.190821 systemd[1]: sshd@17-10.200.20.40:22-10.200.16.10:35456.service: Deactivated successfully.
Jul 2 01:54:27.191524 systemd[1]: session-20.scope: Deactivated successfully.
Jul 2 01:54:27.192446 systemd-logind[1438]: Session 20 logged out. Waiting for processes to exit.
Jul 2 01:54:27.193179 systemd-logind[1438]: Removed session 20.
Jul 2 01:54:32.259738 systemd[1]: Started sshd@18-10.200.20.40:22-10.200.16.10:59750.service.
Jul 2 01:54:32.685647 sshd[3996]: Accepted publickey for core from 10.200.16.10 port 59750 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o
Jul 2 01:54:32.687244 sshd[3996]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 01:54:32.691359 systemd[1]: Started session-21.scope.
Jul 2 01:54:32.692589 systemd-logind[1438]: New session 21 of user core.
Jul 2 01:54:33.059392 sshd[3996]: pam_unix(sshd:session): session closed for user core
Jul 2 01:54:33.062409 systemd[1]: sshd@18-10.200.20.40:22-10.200.16.10:59750.service: Deactivated successfully.
Jul 2 01:54:33.063153 systemd[1]: session-21.scope: Deactivated successfully.
Jul 2 01:54:33.063672 systemd-logind[1438]: Session 21 logged out. Waiting for processes to exit.
Jul 2 01:54:33.064362 systemd-logind[1438]: Removed session 21.
Jul 2 01:54:38.130873 systemd[1]: Started sshd@19-10.200.20.40:22-10.200.16.10:59760.service.
Jul 2 01:54:38.557582 sshd[4010]: Accepted publickey for core from 10.200.16.10 port 59760 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o
Jul 2 01:54:38.559130 sshd[4010]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 01:54:38.563460 systemd[1]: Started session-22.scope.
Jul 2 01:54:38.564708 systemd-logind[1438]: New session 22 of user core.
Jul 2 01:54:38.937135 sshd[4010]: pam_unix(sshd:session): session closed for user core
Jul 2 01:54:38.939547 systemd-logind[1438]: Session 22 logged out. Waiting for processes to exit.
Jul 2 01:54:38.939717 systemd[1]: sshd@19-10.200.20.40:22-10.200.16.10:59760.service: Deactivated successfully.
Jul 2 01:54:38.940464 systemd[1]: session-22.scope: Deactivated successfully.
Jul 2 01:54:38.941332 systemd-logind[1438]: Removed session 22.
Jul 2 01:54:39.012237 systemd[1]: Started sshd@20-10.200.20.40:22-10.200.16.10:35902.service.
Jul 2 01:54:39.438528 sshd[4022]: Accepted publickey for core from 10.200.16.10 port 35902 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o
Jul 2 01:54:39.439861 sshd[4022]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 01:54:39.443594 systemd-logind[1438]: New session 23 of user core.
Jul 2 01:54:39.444069 systemd[1]: Started session-23.scope.
Jul 2 01:54:41.810631 env[1451]: time="2024-07-02T01:54:41.810592555Z" level=info msg="StopContainer for \"6e6c4e447a9cce917ebe48fc4380af148adea8b60414d44c6483979572f651db\" with timeout 30 (s)"
Jul 2 01:54:41.811366 env[1451]: time="2024-07-02T01:54:41.811335117Z" level=info msg="Stop container \"6e6c4e447a9cce917ebe48fc4380af148adea8b60414d44c6483979572f651db\" with signal terminated"
Jul 2 01:54:41.823745 systemd[1]: cri-containerd-6e6c4e447a9cce917ebe48fc4380af148adea8b60414d44c6483979572f651db.scope: Deactivated successfully.
Jul 2 01:54:41.839805 env[1451]: time="2024-07-02T01:54:41.839738588Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 01:54:41.842908 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e6c4e447a9cce917ebe48fc4380af148adea8b60414d44c6483979572f651db-rootfs.mount: Deactivated successfully.
Jul 2 01:54:41.848979 env[1451]: time="2024-07-02T01:54:41.848911851Z" level=info msg="StopContainer for \"6c0b2684bd49cac78ffceb5499455b975f57a65aee26bd1c93028ff69f9357a5\" with timeout 2 (s)"
Jul 2 01:54:41.849340 env[1451]: time="2024-07-02T01:54:41.849319572Z" level=info msg="Stop container \"6c0b2684bd49cac78ffceb5499455b975f57a65aee26bd1c93028ff69f9357a5\" with signal terminated"
Jul 2 01:54:41.855239 systemd-networkd[1604]: lxc_health: Link DOWN
Jul 2 01:54:41.855246 systemd-networkd[1604]: lxc_health: Lost carrier
Jul 2 01:54:41.874399 systemd[1]: cri-containerd-6c0b2684bd49cac78ffceb5499455b975f57a65aee26bd1c93028ff69f9357a5.scope: Deactivated successfully.
Jul 2 01:54:41.874699 systemd[1]: cri-containerd-6c0b2684bd49cac78ffceb5499455b975f57a65aee26bd1c93028ff69f9357a5.scope: Consumed 6.412s CPU time.
Jul 2 01:54:41.896386 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c0b2684bd49cac78ffceb5499455b975f57a65aee26bd1c93028ff69f9357a5-rootfs.mount: Deactivated successfully.
Jul 2 01:54:41.919337 env[1451]: time="2024-07-02T01:54:41.919258788Z" level=info msg="shim disconnected" id=6e6c4e447a9cce917ebe48fc4380af148adea8b60414d44c6483979572f651db
Jul 2 01:54:41.919675 env[1451]: time="2024-07-02T01:54:41.919647709Z" level=warning msg="cleaning up after shim disconnected" id=6e6c4e447a9cce917ebe48fc4380af148adea8b60414d44c6483979572f651db namespace=k8s.io
Jul 2 01:54:41.919874 env[1451]: time="2024-07-02T01:54:41.919859990Z" level=info msg="cleaning up dead shim"
Jul 2 01:54:41.920173 env[1451]: time="2024-07-02T01:54:41.919827630Z" level=info msg="shim disconnected" id=6c0b2684bd49cac78ffceb5499455b975f57a65aee26bd1c93028ff69f9357a5
Jul 2 01:54:41.920257 env[1451]: time="2024-07-02T01:54:41.920242471Z" level=warning msg="cleaning up after shim disconnected" id=6c0b2684bd49cac78ffceb5499455b975f57a65aee26bd1c93028ff69f9357a5 namespace=k8s.io
Jul 2 01:54:41.920311 env[1451]: time="2024-07-02T01:54:41.920299471Z" level=info msg="cleaning up dead shim"
Jul 2 01:54:41.927613 env[1451]: time="2024-07-02T01:54:41.927578769Z" level=warning msg="cleanup warnings time=\"2024-07-02T01:54:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4087 runtime=io.containerd.runc.v2\n"
Jul 2 01:54:41.928939 env[1451]: time="2024-07-02T01:54:41.928893413Z" level=warning msg="cleanup warnings time=\"2024-07-02T01:54:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4088 runtime=io.containerd.runc.v2\n"
Jul 2 01:54:41.933345 env[1451]: time="2024-07-02T01:54:41.933316984Z" level=info msg="StopContainer for \"6e6c4e447a9cce917ebe48fc4380af148adea8b60414d44c6483979572f651db\" returns successfully"
Jul 2 01:54:41.934084 env[1451]: time="2024-07-02T01:54:41.934027146Z" level=info msg="StopPodSandbox for \"4054fe9b7210dbdf1feca1e604d711988538f576afe85dbc1428be19a016df9a\""
Jul 2 01:54:41.934240 env[1451]: time="2024-07-02T01:54:41.934219506Z" level=info msg="Container to stop \"6e6c4e447a9cce917ebe48fc4380af148adea8b60414d44c6483979572f651db\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 01:54:41.935071 env[1451]: time="2024-07-02T01:54:41.935047348Z" level=info msg="StopContainer for \"6c0b2684bd49cac78ffceb5499455b975f57a65aee26bd1c93028ff69f9357a5\" returns successfully"
Jul 2 01:54:41.935475 env[1451]: time="2024-07-02T01:54:41.935446869Z" level=info msg="StopPodSandbox for \"c850f411d3766e61085b0441d63c41d1ae59a054e9a9d463dc17bcb7d041a000\""
Jul 2 01:54:41.935599 env[1451]: time="2024-07-02T01:54:41.935579429Z" level=info msg="Container to stop \"a3bbcc2e60e1f1c2c15575855dd22082b067c78d26679a103b4fec9a6488975c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 01:54:41.935682 env[1451]: time="2024-07-02T01:54:41.935665630Z" level=info msg="Container to stop \"1f668b2b752e8f2a4c5416b0a8c1833870a4b3b046c79cffffa27343e3e9d086\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 01:54:41.935764 env[1451]: time="2024-07-02T01:54:41.935725950Z" level=info msg="Container to stop \"6c0b2684bd49cac78ffceb5499455b975f57a65aee26bd1c93028ff69f9357a5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 01:54:41.935858 env[1451]: time="2024-07-02T01:54:41.935839070Z" level=info msg="Container to stop \"630e45c0d5a1a4fb20ceef5446023a4fd02dc7227749752d3c79d10c346988a3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 01:54:41.935934 env[1451]: time="2024-07-02T01:54:41.935918470Z" level=info msg="Container to stop \"712117bfa3359872fab53fcfed532925f846f64347923d5f9fdb54650cd607d0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 01:54:41.937564 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4054fe9b7210dbdf1feca1e604d711988538f576afe85dbc1428be19a016df9a-shm.mount: Deactivated successfully.
Jul 2 01:54:41.937663 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c850f411d3766e61085b0441d63c41d1ae59a054e9a9d463dc17bcb7d041a000-shm.mount: Deactivated successfully.
Jul 2 01:54:41.941728 systemd[1]: cri-containerd-4054fe9b7210dbdf1feca1e604d711988538f576afe85dbc1428be19a016df9a.scope: Deactivated successfully.
Jul 2 01:54:41.948989 systemd[1]: cri-containerd-c850f411d3766e61085b0441d63c41d1ae59a054e9a9d463dc17bcb7d041a000.scope: Deactivated successfully.
Jul 2 01:54:41.967673 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4054fe9b7210dbdf1feca1e604d711988538f576afe85dbc1428be19a016df9a-rootfs.mount: Deactivated successfully.
Jul 2 01:54:41.984479 env[1451]: time="2024-07-02T01:54:41.984436872Z" level=info msg="shim disconnected" id=4054fe9b7210dbdf1feca1e604d711988538f576afe85dbc1428be19a016df9a
Jul 2 01:54:41.984737 env[1451]: time="2024-07-02T01:54:41.984718593Z" level=warning msg="cleaning up after shim disconnected" id=4054fe9b7210dbdf1feca1e604d711988538f576afe85dbc1428be19a016df9a namespace=k8s.io
Jul 2 01:54:41.984827 env[1451]: time="2024-07-02T01:54:41.984814073Z" level=info msg="cleaning up dead shim"
Jul 2 01:54:41.985507 env[1451]: time="2024-07-02T01:54:41.984672673Z" level=info msg="shim disconnected" id=c850f411d3766e61085b0441d63c41d1ae59a054e9a9d463dc17bcb7d041a000
Jul 2 01:54:41.985603 env[1451]: time="2024-07-02T01:54:41.985587475Z" level=warning msg="cleaning up after shim disconnected" id=c850f411d3766e61085b0441d63c41d1ae59a054e9a9d463dc17bcb7d041a000 namespace=k8s.io
Jul 2 01:54:41.985711 env[1451]: time="2024-07-02T01:54:41.985692636Z" level=info msg="cleaning up dead shim"
Jul 2 01:54:41.992574 env[1451]: time="2024-07-02T01:54:41.992510293Z" level=warning msg="cleanup warnings time=\"2024-07-02T01:54:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4152 runtime=io.containerd.runc.v2\n"
Jul 2 01:54:41.992945 env[1451]: time="2024-07-02T01:54:41.992914854Z" level=info msg="TearDown network for sandbox \"c850f411d3766e61085b0441d63c41d1ae59a054e9a9d463dc17bcb7d041a000\" successfully"
Jul 2 01:54:41.992995 env[1451]: time="2024-07-02T01:54:41.992943174Z" level=info msg="StopPodSandbox for \"c850f411d3766e61085b0441d63c41d1ae59a054e9a9d463dc17bcb7d041a000\" returns successfully"
Jul 2 01:54:42.001894 env[1451]: time="2024-07-02T01:54:42.001846596Z" level=warning msg="cleanup warnings time=\"2024-07-02T01:54:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4151 runtime=io.containerd.runc.v2\n"
Jul 2 01:54:42.002325 env[1451]: time="2024-07-02T01:54:42.002284477Z" level=info msg="TearDown network for sandbox \"4054fe9b7210dbdf1feca1e604d711988538f576afe85dbc1428be19a016df9a\" successfully"
Jul 2 01:54:42.002443 env[1451]: time="2024-07-02T01:54:42.002424438Z" level=info msg="StopPodSandbox for \"4054fe9b7210dbdf1feca1e604d711988538f576afe85dbc1428be19a016df9a\" returns successfully"
Jul 2 01:54:42.074278 kubelet[2492]: I0702 01:54:42.074156 2492 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/294852f5-eec5-4860-8090-eb9124dccd1e-hubble-tls\") pod \"294852f5-eec5-4860-8090-eb9124dccd1e\" (UID: \"294852f5-eec5-4860-8090-eb9124dccd1e\") "
Jul 2 01:54:42.074656 kubelet[2492]: I0702 01:54:42.074642 2492 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/294852f5-eec5-4860-8090-eb9124dccd1e-xtables-lock\") pod \"294852f5-eec5-4860-8090-eb9124dccd1e\" (UID: \"294852f5-eec5-4860-8090-eb9124dccd1e\") "
Jul 2 01:54:42.074790 kubelet[2492]: I0702 01:54:42.074745 2492 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/294852f5-eec5-4860-8090-eb9124dccd1e-cilium-config-path\") pod \"294852f5-eec5-4860-8090-eb9124dccd1e\" (UID: \"294852f5-eec5-4860-8090-eb9124dccd1e\") "
Jul 2 01:54:42.074882 kubelet[2492]: I0702 01:54:42.074872 2492 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/294852f5-eec5-4860-8090-eb9124dccd1e-cni-path\") pod \"294852f5-eec5-4860-8090-eb9124dccd1e\" (UID: \"294852f5-eec5-4860-8090-eb9124dccd1e\") "
Jul 2 01:54:42.074971 kubelet[2492]: I0702 01:54:42.074963 2492 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/294852f5-eec5-4860-8090-eb9124dccd1e-host-proc-sys-kernel\") pod \"294852f5-eec5-4860-8090-eb9124dccd1e\" (UID: \"294852f5-eec5-4860-8090-eb9124dccd1e\") "
Jul 2 01:54:42.075060 kubelet[2492]: I0702 01:54:42.075051 2492 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/294852f5-eec5-4860-8090-eb9124dccd1e-clustermesh-secrets\") pod \"294852f5-eec5-4860-8090-eb9124dccd1e\" (UID: \"294852f5-eec5-4860-8090-eb9124dccd1e\") "
Jul 2 01:54:42.075142 kubelet[2492]: I0702 01:54:42.075133 2492 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/294852f5-eec5-4860-8090-eb9124dccd1e-cilium-cgroup\") pod \"294852f5-eec5-4860-8090-eb9124dccd1e\" (UID: \"294852f5-eec5-4860-8090-eb9124dccd1e\") "
Jul 2 01:54:42.075219 kubelet[2492]: I0702 01:54:42.075211 2492 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/294852f5-eec5-4860-8090-eb9124dccd1e-hostproc\") pod \"294852f5-eec5-4860-8090-eb9124dccd1e\" (UID: \"294852f5-eec5-4860-8090-eb9124dccd1e\") "
Jul 2 01:54:42.075299 kubelet[2492]: I0702 01:54:42.075291 2492 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/294852f5-eec5-4860-8090-eb9124dccd1e-cilium-run\") pod \"294852f5-eec5-4860-8090-eb9124dccd1e\" (UID: \"294852f5-eec5-4860-8090-eb9124dccd1e\") "
Jul 2 01:54:42.075390 kubelet[2492]: I0702 01:54:42.075381 2492 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfw25\" (UniqueName: \"kubernetes.io/projected/52f38591-058f-473e-acfb-794a90308783-kube-api-access-lfw25\") pod \"52f38591-058f-473e-acfb-794a90308783\" (UID: \"52f38591-058f-473e-acfb-794a90308783\") "
Jul 2 01:54:42.075477 kubelet[2492]: I0702 01:54:42.075468 2492 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/52f38591-058f-473e-acfb-794a90308783-cilium-config-path\") pod \"52f38591-058f-473e-acfb-794a90308783\" (UID: \"52f38591-058f-473e-acfb-794a90308783\") "
Jul 2 01:54:42.075557 kubelet[2492]: I0702 01:54:42.075549 2492 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/294852f5-eec5-4860-8090-eb9124dccd1e-lib-modules\") pod \"294852f5-eec5-4860-8090-eb9124dccd1e\" (UID: \"294852f5-eec5-4860-8090-eb9124dccd1e\") "
Jul 2 01:54:42.075638 kubelet[2492]: I0702 01:54:42.075630 2492 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/294852f5-eec5-4860-8090-eb9124dccd1e-host-proc-sys-net\") pod \"294852f5-eec5-4860-8090-eb9124dccd1e\" (UID: \"294852f5-eec5-4860-8090-eb9124dccd1e\") "
Jul 2 01:54:42.075718 kubelet[2492]: I0702 01:54:42.075709 2492 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/294852f5-eec5-4860-8090-eb9124dccd1e-etc-cni-netd\") pod \"294852f5-eec5-4860-8090-eb9124dccd1e\" (UID: \"294852f5-eec5-4860-8090-eb9124dccd1e\") "
Jul 2 01:54:42.075800 kubelet[2492]: I0702 01:54:42.075792 2492 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/294852f5-eec5-4860-8090-eb9124dccd1e-bpf-maps\") pod \"294852f5-eec5-4860-8090-eb9124dccd1e\" (UID: \"294852f5-eec5-4860-8090-eb9124dccd1e\") "
Jul 2 01:54:42.075888 kubelet[2492]: I0702 01:54:42.075879 2492 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fr6rt\" (UniqueName: \"kubernetes.io/projected/294852f5-eec5-4860-8090-eb9124dccd1e-kube-api-access-fr6rt\") pod \"294852f5-eec5-4860-8090-eb9124dccd1e\" (UID: \"294852f5-eec5-4860-8090-eb9124dccd1e\") "
Jul 2 01:54:42.077245 kubelet[2492]: I0702 01:54:42.077215 2492 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/294852f5-eec5-4860-8090-eb9124dccd1e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "294852f5-eec5-4860-8090-eb9124dccd1e" (UID: "294852f5-eec5-4860-8090-eb9124dccd1e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 01:54:42.077417 kubelet[2492]: I0702 01:54:42.077270 2492 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/294852f5-eec5-4860-8090-eb9124dccd1e-cni-path" (OuterVolumeSpecName: "cni-path") pod "294852f5-eec5-4860-8090-eb9124dccd1e" (UID: "294852f5-eec5-4860-8090-eb9124dccd1e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 01:54:42.077417 kubelet[2492]: I0702 01:54:42.075404 2492 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/294852f5-eec5-4860-8090-eb9124dccd1e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "294852f5-eec5-4860-8090-eb9124dccd1e" (UID: "294852f5-eec5-4860-8090-eb9124dccd1e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 01:54:42.077417 kubelet[2492]: I0702 01:54:42.075425 2492 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/294852f5-eec5-4860-8090-eb9124dccd1e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "294852f5-eec5-4860-8090-eb9124dccd1e" (UID: "294852f5-eec5-4860-8090-eb9124dccd1e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 01:54:42.077417 kubelet[2492]: I0702 01:54:42.077304 2492 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/294852f5-eec5-4860-8090-eb9124dccd1e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "294852f5-eec5-4860-8090-eb9124dccd1e" (UID: "294852f5-eec5-4860-8090-eb9124dccd1e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 01:54:42.077417 kubelet[2492]: I0702 01:54:42.077320 2492 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/294852f5-eec5-4860-8090-eb9124dccd1e-hostproc" (OuterVolumeSpecName: "hostproc") pod "294852f5-eec5-4860-8090-eb9124dccd1e" (UID: "294852f5-eec5-4860-8090-eb9124dccd1e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 01:54:42.077591 kubelet[2492]: I0702 01:54:42.077336 2492 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/294852f5-eec5-4860-8090-eb9124dccd1e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "294852f5-eec5-4860-8090-eb9124dccd1e" (UID: "294852f5-eec5-4860-8090-eb9124dccd1e"). InnerVolumeSpecName "cilium-run".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 01:54:42.079918 kubelet[2492]: I0702 01:54:42.079890 2492 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/294852f5-eec5-4860-8090-eb9124dccd1e-kube-api-access-fr6rt" (OuterVolumeSpecName: "kube-api-access-fr6rt") pod "294852f5-eec5-4860-8090-eb9124dccd1e" (UID: "294852f5-eec5-4860-8090-eb9124dccd1e"). InnerVolumeSpecName "kube-api-access-fr6rt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 01:54:42.080082 kubelet[2492]: I0702 01:54:42.080056 2492 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/294852f5-eec5-4860-8090-eb9124dccd1e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "294852f5-eec5-4860-8090-eb9124dccd1e" (UID: "294852f5-eec5-4860-8090-eb9124dccd1e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 01:54:42.080170 kubelet[2492]: I0702 01:54:42.080158 2492 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/294852f5-eec5-4860-8090-eb9124dccd1e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "294852f5-eec5-4860-8090-eb9124dccd1e" (UID: "294852f5-eec5-4860-8090-eb9124dccd1e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 01:54:42.080259 kubelet[2492]: I0702 01:54:42.080247 2492 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/294852f5-eec5-4860-8090-eb9124dccd1e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "294852f5-eec5-4860-8090-eb9124dccd1e" (UID: "294852f5-eec5-4860-8090-eb9124dccd1e"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 01:54:42.080346 kubelet[2492]: I0702 01:54:42.080333 2492 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/294852f5-eec5-4860-8090-eb9124dccd1e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "294852f5-eec5-4860-8090-eb9124dccd1e" (UID: "294852f5-eec5-4860-8090-eb9124dccd1e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 01:54:42.080430 kubelet[2492]: I0702 01:54:42.080419 2492 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/294852f5-eec5-4860-8090-eb9124dccd1e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "294852f5-eec5-4860-8090-eb9124dccd1e" (UID: "294852f5-eec5-4860-8090-eb9124dccd1e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 01:54:42.081705 kubelet[2492]: I0702 01:54:42.081672 2492 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52f38591-058f-473e-acfb-794a90308783-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "52f38591-058f-473e-acfb-794a90308783" (UID: "52f38591-058f-473e-acfb-794a90308783"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 01:54:42.082948 kubelet[2492]: I0702 01:54:42.082928 2492 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/294852f5-eec5-4860-8090-eb9124dccd1e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "294852f5-eec5-4860-8090-eb9124dccd1e" (UID: "294852f5-eec5-4860-8090-eb9124dccd1e"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 01:54:42.083897 kubelet[2492]: I0702 01:54:42.083869 2492 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52f38591-058f-473e-acfb-794a90308783-kube-api-access-lfw25" (OuterVolumeSpecName: "kube-api-access-lfw25") pod "52f38591-058f-473e-acfb-794a90308783" (UID: "52f38591-058f-473e-acfb-794a90308783"). InnerVolumeSpecName "kube-api-access-lfw25". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 01:54:42.177077 kubelet[2492]: I0702 01:54:42.177039 2492 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/294852f5-eec5-4860-8090-eb9124dccd1e-cilium-cgroup\") on node \"ci-3510.3.5-a-267983ca13\" DevicePath \"\"" Jul 2 01:54:42.177077 kubelet[2492]: I0702 01:54:42.177076 2492 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/294852f5-eec5-4860-8090-eb9124dccd1e-hostproc\") on node \"ci-3510.3.5-a-267983ca13\" DevicePath \"\"" Jul 2 01:54:42.177077 kubelet[2492]: I0702 01:54:42.177087 2492 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/294852f5-eec5-4860-8090-eb9124dccd1e-cilium-run\") on node \"ci-3510.3.5-a-267983ca13\" DevicePath \"\"" Jul 2 01:54:42.177279 kubelet[2492]: I0702 01:54:42.177102 2492 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-lfw25\" (UniqueName: \"kubernetes.io/projected/52f38591-058f-473e-acfb-794a90308783-kube-api-access-lfw25\") on node \"ci-3510.3.5-a-267983ca13\" DevicePath \"\"" Jul 2 01:54:42.177279 kubelet[2492]: I0702 01:54:42.177113 2492 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/294852f5-eec5-4860-8090-eb9124dccd1e-etc-cni-netd\") on node \"ci-3510.3.5-a-267983ca13\" DevicePath \"\"" Jul 2 01:54:42.177279 kubelet[2492]: I0702 01:54:42.177123 2492 
reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/52f38591-058f-473e-acfb-794a90308783-cilium-config-path\") on node \"ci-3510.3.5-a-267983ca13\" DevicePath \"\"" Jul 2 01:54:42.177279 kubelet[2492]: I0702 01:54:42.177133 2492 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/294852f5-eec5-4860-8090-eb9124dccd1e-lib-modules\") on node \"ci-3510.3.5-a-267983ca13\" DevicePath \"\"" Jul 2 01:54:42.177279 kubelet[2492]: I0702 01:54:42.177144 2492 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/294852f5-eec5-4860-8090-eb9124dccd1e-host-proc-sys-net\") on node \"ci-3510.3.5-a-267983ca13\" DevicePath \"\"" Jul 2 01:54:42.177279 kubelet[2492]: I0702 01:54:42.177155 2492 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-fr6rt\" (UniqueName: \"kubernetes.io/projected/294852f5-eec5-4860-8090-eb9124dccd1e-kube-api-access-fr6rt\") on node \"ci-3510.3.5-a-267983ca13\" DevicePath \"\"" Jul 2 01:54:42.177279 kubelet[2492]: I0702 01:54:42.177165 2492 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/294852f5-eec5-4860-8090-eb9124dccd1e-bpf-maps\") on node \"ci-3510.3.5-a-267983ca13\" DevicePath \"\"" Jul 2 01:54:42.177279 kubelet[2492]: I0702 01:54:42.177175 2492 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/294852f5-eec5-4860-8090-eb9124dccd1e-hubble-tls\") on node \"ci-3510.3.5-a-267983ca13\" DevicePath \"\"" Jul 2 01:54:42.177459 kubelet[2492]: I0702 01:54:42.177185 2492 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/294852f5-eec5-4860-8090-eb9124dccd1e-xtables-lock\") on node \"ci-3510.3.5-a-267983ca13\" DevicePath \"\"" Jul 2 01:54:42.177459 kubelet[2492]: I0702 01:54:42.177198 2492 
reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/294852f5-eec5-4860-8090-eb9124dccd1e-cilium-config-path\") on node \"ci-3510.3.5-a-267983ca13\" DevicePath \"\"" Jul 2 01:54:42.177459 kubelet[2492]: I0702 01:54:42.177208 2492 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/294852f5-eec5-4860-8090-eb9124dccd1e-cni-path\") on node \"ci-3510.3.5-a-267983ca13\" DevicePath \"\"" Jul 2 01:54:42.177459 kubelet[2492]: I0702 01:54:42.177218 2492 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/294852f5-eec5-4860-8090-eb9124dccd1e-host-proc-sys-kernel\") on node \"ci-3510.3.5-a-267983ca13\" DevicePath \"\"" Jul 2 01:54:42.177459 kubelet[2492]: I0702 01:54:42.177227 2492 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/294852f5-eec5-4860-8090-eb9124dccd1e-clustermesh-secrets\") on node \"ci-3510.3.5-a-267983ca13\" DevicePath \"\"" Jul 2 01:54:42.194627 kubelet[2492]: I0702 01:54:42.194603 2492 scope.go:117] "RemoveContainer" containerID="6e6c4e447a9cce917ebe48fc4380af148adea8b60414d44c6483979572f651db" Jul 2 01:54:42.204055 systemd[1]: Removed slice kubepods-besteffort-pod52f38591_058f_473e_acfb_794a90308783.slice. Jul 2 01:54:42.206820 systemd[1]: Removed slice kubepods-burstable-pod294852f5_eec5_4860_8090_eb9124dccd1e.slice. Jul 2 01:54:42.206906 systemd[1]: kubepods-burstable-pod294852f5_eec5_4860_8090_eb9124dccd1e.slice: Consumed 6.500s CPU time. 
Jul 2 01:54:42.208054 env[1451]: time="2024-07-02T01:54:42.207768943Z" level=info msg="RemoveContainer for \"6e6c4e447a9cce917ebe48fc4380af148adea8b60414d44c6483979572f651db\"" Jul 2 01:54:42.222674 env[1451]: time="2024-07-02T01:54:42.222512140Z" level=info msg="RemoveContainer for \"6e6c4e447a9cce917ebe48fc4380af148adea8b60414d44c6483979572f651db\" returns successfully" Jul 2 01:54:42.223022 kubelet[2492]: I0702 01:54:42.223003 2492 scope.go:117] "RemoveContainer" containerID="6e6c4e447a9cce917ebe48fc4380af148adea8b60414d44c6483979572f651db" Jul 2 01:54:42.223442 env[1451]: time="2024-07-02T01:54:42.223333862Z" level=error msg="ContainerStatus for \"6e6c4e447a9cce917ebe48fc4380af148adea8b60414d44c6483979572f651db\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6e6c4e447a9cce917ebe48fc4380af148adea8b60414d44c6483979572f651db\": not found" Jul 2 01:54:42.223612 kubelet[2492]: E0702 01:54:42.223598 2492 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6e6c4e447a9cce917ebe48fc4380af148adea8b60414d44c6483979572f651db\": not found" containerID="6e6c4e447a9cce917ebe48fc4380af148adea8b60414d44c6483979572f651db" Jul 2 01:54:42.223785 kubelet[2492]: I0702 01:54:42.223772 2492 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6e6c4e447a9cce917ebe48fc4380af148adea8b60414d44c6483979572f651db"} err="failed to get container status \"6e6c4e447a9cce917ebe48fc4380af148adea8b60414d44c6483979572f651db\": rpc error: code = NotFound desc = an error occurred when try to find container \"6e6c4e447a9cce917ebe48fc4380af148adea8b60414d44c6483979572f651db\": not found" Jul 2 01:54:42.223860 kubelet[2492]: I0702 01:54:42.223850 2492 scope.go:117] "RemoveContainer" containerID="6c0b2684bd49cac78ffceb5499455b975f57a65aee26bd1c93028ff69f9357a5" Jul 2 01:54:42.225736 env[1451]: 
time="2024-07-02T01:54:42.225470227Z" level=info msg="RemoveContainer for \"6c0b2684bd49cac78ffceb5499455b975f57a65aee26bd1c93028ff69f9357a5\"" Jul 2 01:54:42.238414 env[1451]: time="2024-07-02T01:54:42.238296179Z" level=info msg="RemoveContainer for \"6c0b2684bd49cac78ffceb5499455b975f57a65aee26bd1c93028ff69f9357a5\" returns successfully" Jul 2 01:54:42.238601 kubelet[2492]: I0702 01:54:42.238555 2492 scope.go:117] "RemoveContainer" containerID="1f668b2b752e8f2a4c5416b0a8c1833870a4b3b046c79cffffa27343e3e9d086" Jul 2 01:54:42.239888 env[1451]: time="2024-07-02T01:54:42.239852422Z" level=info msg="RemoveContainer for \"1f668b2b752e8f2a4c5416b0a8c1833870a4b3b046c79cffffa27343e3e9d086\"" Jul 2 01:54:42.250416 env[1451]: time="2024-07-02T01:54:42.250367648Z" level=info msg="RemoveContainer for \"1f668b2b752e8f2a4c5416b0a8c1833870a4b3b046c79cffffa27343e3e9d086\" returns successfully" Jul 2 01:54:42.250679 kubelet[2492]: I0702 01:54:42.250649 2492 scope.go:117] "RemoveContainer" containerID="a3bbcc2e60e1f1c2c15575855dd22082b067c78d26679a103b4fec9a6488975c" Jul 2 01:54:42.251910 env[1451]: time="2024-07-02T01:54:42.251878532Z" level=info msg="RemoveContainer for \"a3bbcc2e60e1f1c2c15575855dd22082b067c78d26679a103b4fec9a6488975c\"" Jul 2 01:54:42.260532 env[1451]: time="2024-07-02T01:54:42.260485313Z" level=info msg="RemoveContainer for \"a3bbcc2e60e1f1c2c15575855dd22082b067c78d26679a103b4fec9a6488975c\" returns successfully" Jul 2 01:54:42.260803 kubelet[2492]: I0702 01:54:42.260784 2492 scope.go:117] "RemoveContainer" containerID="712117bfa3359872fab53fcfed532925f846f64347923d5f9fdb54650cd607d0" Jul 2 01:54:42.261996 env[1451]: time="2024-07-02T01:54:42.261960797Z" level=info msg="RemoveContainer for \"712117bfa3359872fab53fcfed532925f846f64347923d5f9fdb54650cd607d0\"" Jul 2 01:54:42.272575 env[1451]: time="2024-07-02T01:54:42.272523623Z" level=info msg="RemoveContainer for \"712117bfa3359872fab53fcfed532925f846f64347923d5f9fdb54650cd607d0\" returns successfully" Jul 2 
01:54:42.272915 kubelet[2492]: I0702 01:54:42.272872 2492 scope.go:117] "RemoveContainer" containerID="630e45c0d5a1a4fb20ceef5446023a4fd02dc7227749752d3c79d10c346988a3" Jul 2 01:54:42.274235 env[1451]: time="2024-07-02T01:54:42.274207387Z" level=info msg="RemoveContainer for \"630e45c0d5a1a4fb20ceef5446023a4fd02dc7227749752d3c79d10c346988a3\"" Jul 2 01:54:42.284156 env[1451]: time="2024-07-02T01:54:42.284116611Z" level=info msg="RemoveContainer for \"630e45c0d5a1a4fb20ceef5446023a4fd02dc7227749752d3c79d10c346988a3\" returns successfully" Jul 2 01:54:42.284541 kubelet[2492]: I0702 01:54:42.284522 2492 scope.go:117] "RemoveContainer" containerID="6c0b2684bd49cac78ffceb5499455b975f57a65aee26bd1c93028ff69f9357a5" Jul 2 01:54:42.284948 env[1451]: time="2024-07-02T01:54:42.284873693Z" level=error msg="ContainerStatus for \"6c0b2684bd49cac78ffceb5499455b975f57a65aee26bd1c93028ff69f9357a5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6c0b2684bd49cac78ffceb5499455b975f57a65aee26bd1c93028ff69f9357a5\": not found" Jul 2 01:54:42.285109 kubelet[2492]: E0702 01:54:42.285096 2492 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6c0b2684bd49cac78ffceb5499455b975f57a65aee26bd1c93028ff69f9357a5\": not found" containerID="6c0b2684bd49cac78ffceb5499455b975f57a65aee26bd1c93028ff69f9357a5" Jul 2 01:54:42.285225 kubelet[2492]: I0702 01:54:42.285216 2492 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6c0b2684bd49cac78ffceb5499455b975f57a65aee26bd1c93028ff69f9357a5"} err="failed to get container status \"6c0b2684bd49cac78ffceb5499455b975f57a65aee26bd1c93028ff69f9357a5\": rpc error: code = NotFound desc = an error occurred when try to find container \"6c0b2684bd49cac78ffceb5499455b975f57a65aee26bd1c93028ff69f9357a5\": not found" Jul 2 01:54:42.285290 kubelet[2492]: I0702 
01:54:42.285280 2492 scope.go:117] "RemoveContainer" containerID="1f668b2b752e8f2a4c5416b0a8c1833870a4b3b046c79cffffa27343e3e9d086" Jul 2 01:54:42.285709 env[1451]: time="2024-07-02T01:54:42.285660175Z" level=error msg="ContainerStatus for \"1f668b2b752e8f2a4c5416b0a8c1833870a4b3b046c79cffffa27343e3e9d086\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1f668b2b752e8f2a4c5416b0a8c1833870a4b3b046c79cffffa27343e3e9d086\": not found" Jul 2 01:54:42.285925 kubelet[2492]: E0702 01:54:42.285912 2492 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1f668b2b752e8f2a4c5416b0a8c1833870a4b3b046c79cffffa27343e3e9d086\": not found" containerID="1f668b2b752e8f2a4c5416b0a8c1833870a4b3b046c79cffffa27343e3e9d086" Jul 2 01:54:42.286021 kubelet[2492]: I0702 01:54:42.286012 2492 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1f668b2b752e8f2a4c5416b0a8c1833870a4b3b046c79cffffa27343e3e9d086"} err="failed to get container status \"1f668b2b752e8f2a4c5416b0a8c1833870a4b3b046c79cffffa27343e3e9d086\": rpc error: code = NotFound desc = an error occurred when try to find container \"1f668b2b752e8f2a4c5416b0a8c1833870a4b3b046c79cffffa27343e3e9d086\": not found" Jul 2 01:54:42.286095 kubelet[2492]: I0702 01:54:42.286086 2492 scope.go:117] "RemoveContainer" containerID="a3bbcc2e60e1f1c2c15575855dd22082b067c78d26679a103b4fec9a6488975c" Jul 2 01:54:42.286362 env[1451]: time="2024-07-02T01:54:42.286292777Z" level=error msg="ContainerStatus for \"a3bbcc2e60e1f1c2c15575855dd22082b067c78d26679a103b4fec9a6488975c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a3bbcc2e60e1f1c2c15575855dd22082b067c78d26679a103b4fec9a6488975c\": not found" Jul 2 01:54:42.286543 kubelet[2492]: E0702 01:54:42.286490 2492 remote_runtime.go:432] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a3bbcc2e60e1f1c2c15575855dd22082b067c78d26679a103b4fec9a6488975c\": not found" containerID="a3bbcc2e60e1f1c2c15575855dd22082b067c78d26679a103b4fec9a6488975c" Jul 2 01:54:42.286656 kubelet[2492]: I0702 01:54:42.286645 2492 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a3bbcc2e60e1f1c2c15575855dd22082b067c78d26679a103b4fec9a6488975c"} err="failed to get container status \"a3bbcc2e60e1f1c2c15575855dd22082b067c78d26679a103b4fec9a6488975c\": rpc error: code = NotFound desc = an error occurred when try to find container \"a3bbcc2e60e1f1c2c15575855dd22082b067c78d26679a103b4fec9a6488975c\": not found" Jul 2 01:54:42.286722 kubelet[2492]: I0702 01:54:42.286713 2492 scope.go:117] "RemoveContainer" containerID="712117bfa3359872fab53fcfed532925f846f64347923d5f9fdb54650cd607d0" Jul 2 01:54:42.287011 env[1451]: time="2024-07-02T01:54:42.286968579Z" level=error msg="ContainerStatus for \"712117bfa3359872fab53fcfed532925f846f64347923d5f9fdb54650cd607d0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"712117bfa3359872fab53fcfed532925f846f64347923d5f9fdb54650cd607d0\": not found" Jul 2 01:54:42.287226 kubelet[2492]: E0702 01:54:42.287204 2492 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"712117bfa3359872fab53fcfed532925f846f64347923d5f9fdb54650cd607d0\": not found" containerID="712117bfa3359872fab53fcfed532925f846f64347923d5f9fdb54650cd607d0" Jul 2 01:54:42.287278 kubelet[2492]: I0702 01:54:42.287259 2492 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"712117bfa3359872fab53fcfed532925f846f64347923d5f9fdb54650cd607d0"} err="failed to get container status 
\"712117bfa3359872fab53fcfed532925f846f64347923d5f9fdb54650cd607d0\": rpc error: code = NotFound desc = an error occurred when try to find container \"712117bfa3359872fab53fcfed532925f846f64347923d5f9fdb54650cd607d0\": not found" Jul 2 01:54:42.287278 kubelet[2492]: I0702 01:54:42.287272 2492 scope.go:117] "RemoveContainer" containerID="630e45c0d5a1a4fb20ceef5446023a4fd02dc7227749752d3c79d10c346988a3" Jul 2 01:54:42.287526 env[1451]: time="2024-07-02T01:54:42.287472620Z" level=error msg="ContainerStatus for \"630e45c0d5a1a4fb20ceef5446023a4fd02dc7227749752d3c79d10c346988a3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"630e45c0d5a1a4fb20ceef5446023a4fd02dc7227749752d3c79d10c346988a3\": not found" Jul 2 01:54:42.287782 kubelet[2492]: E0702 01:54:42.287742 2492 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"630e45c0d5a1a4fb20ceef5446023a4fd02dc7227749752d3c79d10c346988a3\": not found" containerID="630e45c0d5a1a4fb20ceef5446023a4fd02dc7227749752d3c79d10c346988a3" Jul 2 01:54:42.287840 kubelet[2492]: I0702 01:54:42.287793 2492 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"630e45c0d5a1a4fb20ceef5446023a4fd02dc7227749752d3c79d10c346988a3"} err="failed to get container status \"630e45c0d5a1a4fb20ceef5446023a4fd02dc7227749752d3c79d10c346988a3\": rpc error: code = NotFound desc = an error occurred when try to find container \"630e45c0d5a1a4fb20ceef5446023a4fd02dc7227749752d3c79d10c346988a3\": not found" Jul 2 01:54:42.809177 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c850f411d3766e61085b0441d63c41d1ae59a054e9a9d463dc17bcb7d041a000-rootfs.mount: Deactivated successfully. 
Jul 2 01:54:42.809277 systemd[1]: var-lib-kubelet-pods-52f38591\x2d058f\x2d473e\x2dacfb\x2d794a90308783-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlfw25.mount: Deactivated successfully. Jul 2 01:54:42.809334 systemd[1]: var-lib-kubelet-pods-294852f5\x2deec5\x2d4860\x2d8090\x2deb9124dccd1e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfr6rt.mount: Deactivated successfully. Jul 2 01:54:42.809391 systemd[1]: var-lib-kubelet-pods-294852f5\x2deec5\x2d4860\x2d8090\x2deb9124dccd1e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 01:54:42.809439 systemd[1]: var-lib-kubelet-pods-294852f5\x2deec5\x2d4860\x2d8090\x2deb9124dccd1e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 01:54:42.874746 kubelet[2492]: E0702 01:54:42.874691 2492 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 01:54:43.765578 kubelet[2492]: I0702 01:54:43.765538 2492 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="294852f5-eec5-4860-8090-eb9124dccd1e" path="/var/lib/kubelet/pods/294852f5-eec5-4860-8090-eb9124dccd1e/volumes" Jul 2 01:54:43.766572 kubelet[2492]: I0702 01:54:43.766554 2492 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="52f38591-058f-473e-acfb-794a90308783" path="/var/lib/kubelet/pods/52f38591-058f-473e-acfb-794a90308783/volumes" Jul 2 01:54:43.847085 sshd[4022]: pam_unix(sshd:session): session closed for user core Jul 2 01:54:43.849201 systemd[1]: sshd@20-10.200.20.40:22-10.200.16.10:35902.service: Deactivated successfully. Jul 2 01:54:43.849971 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 01:54:43.850159 systemd[1]: session-23.scope: Consumed 1.487s CPU time. Jul 2 01:54:43.850545 systemd-logind[1438]: Session 23 logged out. Waiting for processes to exit. 
Jul 2 01:54:43.851401 systemd-logind[1438]: Removed session 23. Jul 2 01:54:43.918108 systemd[1]: Started sshd@21-10.200.20.40:22-10.200.16.10:35916.service. Jul 2 01:54:44.349954 sshd[4184]: Accepted publickey for core from 10.200.16.10 port 35916 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o Jul 2 01:54:44.351544 sshd[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 01:54:44.355809 systemd[1]: Started session-24.scope. Jul 2 01:54:44.356352 systemd-logind[1438]: New session 24 of user core. Jul 2 01:54:46.342309 kubelet[2492]: I0702 01:54:46.342262 2492 topology_manager.go:215] "Topology Admit Handler" podUID="a2a3811d-6977-4f32-8c99-13bf1a17c6f7" podNamespace="kube-system" podName="cilium-xprpl" Jul 2 01:54:46.342667 kubelet[2492]: E0702 01:54:46.342332 2492 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="294852f5-eec5-4860-8090-eb9124dccd1e" containerName="clean-cilium-state" Jul 2 01:54:46.342667 kubelet[2492]: E0702 01:54:46.342441 2492 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="294852f5-eec5-4860-8090-eb9124dccd1e" containerName="cilium-agent" Jul 2 01:54:46.342667 kubelet[2492]: E0702 01:54:46.342452 2492 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="294852f5-eec5-4860-8090-eb9124dccd1e" containerName="mount-cgroup" Jul 2 01:54:46.342667 kubelet[2492]: E0702 01:54:46.342473 2492 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="294852f5-eec5-4860-8090-eb9124dccd1e" containerName="apply-sysctl-overwrites" Jul 2 01:54:46.342667 kubelet[2492]: E0702 01:54:46.342489 2492 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="294852f5-eec5-4860-8090-eb9124dccd1e" containerName="mount-bpf-fs" Jul 2 01:54:46.342667 kubelet[2492]: E0702 01:54:46.342498 2492 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="52f38591-058f-473e-acfb-794a90308783" containerName="cilium-operator" Jul 2 01:54:46.342667 
kubelet[2492]: I0702 01:54:46.342525 2492 memory_manager.go:346] "RemoveStaleState removing state" podUID="294852f5-eec5-4860-8090-eb9124dccd1e" containerName="cilium-agent"
Jul 2 01:54:46.342667 kubelet[2492]: I0702 01:54:46.342532 2492 memory_manager.go:346] "RemoveStaleState removing state" podUID="52f38591-058f-473e-acfb-794a90308783" containerName="cilium-operator"
Jul 2 01:54:46.347890 systemd[1]: Created slice kubepods-burstable-poda2a3811d_6977_4f32_8c99_13bf1a17c6f7.slice.
Jul 2 01:54:46.354143 kubelet[2492]: W0702 01:54:46.354112 2492 reflector.go:535] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510.3.5-a-267983ca13" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.5-a-267983ca13' and this object
Jul 2 01:54:46.354300 kubelet[2492]: E0702 01:54:46.354288 2492 reflector.go:147] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510.3.5-a-267983ca13" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.5-a-267983ca13' and this object
Jul 2 01:54:46.354354 kubelet[2492]: W0702 01:54:46.354132 2492 reflector.go:535] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.5-a-267983ca13" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.5-a-267983ca13' and this object
Jul 2 01:54:46.354429 kubelet[2492]: E0702 01:54:46.354419 2492 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.5-a-267983ca13" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.5-a-267983ca13' and this object
Jul 2 01:54:46.355405 kubelet[2492]: W0702 01:54:46.355364 2492 reflector.go:535] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.5-a-267983ca13" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.5-a-267983ca13' and this object
Jul 2 01:54:46.355405 kubelet[2492]: E0702 01:54:46.355404 2492 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.5-a-267983ca13" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.5-a-267983ca13' and this object
Jul 2 01:54:46.400210 kubelet[2492]: I0702 01:54:46.400170 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-cilium-run\") pod \"cilium-xprpl\" (UID: \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\") " pod="kube-system/cilium-xprpl"
Jul 2 01:54:46.400362 kubelet[2492]: I0702 01:54:46.400224 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-etc-cni-netd\") pod \"cilium-xprpl\" (UID: \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\") " pod="kube-system/cilium-xprpl"
Jul 2 01:54:46.400362 kubelet[2492]: I0702 01:54:46.400247 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-bpf-maps\") pod \"cilium-xprpl\" (UID: \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\") " pod="kube-system/cilium-xprpl"
Jul 2 01:54:46.400362 kubelet[2492]: I0702 01:54:46.400264 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-hubble-tls\") pod \"cilium-xprpl\" (UID: \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\") " pod="kube-system/cilium-xprpl"
Jul 2 01:54:46.400362 kubelet[2492]: I0702 01:54:46.400295 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-cilium-ipsec-secrets\") pod \"cilium-xprpl\" (UID: \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\") " pod="kube-system/cilium-xprpl"
Jul 2 01:54:46.400362 kubelet[2492]: I0702 01:54:46.400318 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-host-proc-sys-net\") pod \"cilium-xprpl\" (UID: \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\") " pod="kube-system/cilium-xprpl"
Jul 2 01:54:46.400362 kubelet[2492]: I0702 01:54:46.400339 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-cni-path\") pod \"cilium-xprpl\" (UID: \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\") " pod="kube-system/cilium-xprpl"
Jul 2 01:54:46.400515 kubelet[2492]: I0702 01:54:46.400358 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-xtables-lock\") pod \"cilium-xprpl\" (UID: \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\") " pod="kube-system/cilium-xprpl"
Jul 2 01:54:46.400515 kubelet[2492]: I0702 01:54:46.400389 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-hostproc\") pod \"cilium-xprpl\" (UID: \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\") " pod="kube-system/cilium-xprpl"
Jul 2 01:54:46.400515 kubelet[2492]: I0702 01:54:46.400409 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-clustermesh-secrets\") pod \"cilium-xprpl\" (UID: \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\") " pod="kube-system/cilium-xprpl"
Jul 2 01:54:46.400515 kubelet[2492]: I0702 01:54:46.400428 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-cilium-config-path\") pod \"cilium-xprpl\" (UID: \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\") " pod="kube-system/cilium-xprpl"
Jul 2 01:54:46.400515 kubelet[2492]: I0702 01:54:46.400456 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-host-proc-sys-kernel\") pod \"cilium-xprpl\" (UID: \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\") " pod="kube-system/cilium-xprpl"
Jul 2 01:54:46.400515 kubelet[2492]: I0702 01:54:46.400479 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-lib-modules\") pod \"cilium-xprpl\" (UID: \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\") " pod="kube-system/cilium-xprpl"
Jul 2 01:54:46.400675 kubelet[2492]: I0702 01:54:46.400498 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-cilium-cgroup\") pod \"cilium-xprpl\" (UID: \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\") " pod="kube-system/cilium-xprpl"
Jul 2 01:54:46.400675 kubelet[2492]: I0702 01:54:46.400526 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxg4q\" (UniqueName: \"kubernetes.io/projected/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-kube-api-access-gxg4q\") pod \"cilium-xprpl\" (UID: \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\") " pod="kube-system/cilium-xprpl"
Jul 2 01:54:46.416270 sshd[4184]: pam_unix(sshd:session): session closed for user core
Jul 2 01:54:46.418869 systemd[1]: sshd@21-10.200.20.40:22-10.200.16.10:35916.service: Deactivated successfully.
Jul 2 01:54:46.419625 systemd[1]: session-24.scope: Deactivated successfully.
Jul 2 01:54:46.419825 systemd[1]: session-24.scope: Consumed 1.660s CPU time.
Jul 2 01:54:46.420828 systemd-logind[1438]: Session 24 logged out. Waiting for processes to exit.
Jul 2 01:54:46.421578 systemd-logind[1438]: Removed session 24.
Jul 2 01:54:46.495787 systemd[1]: Started sshd@22-10.200.20.40:22-10.200.16.10:35930.service.
Jul 2 01:54:46.928425 sshd[4194]: Accepted publickey for core from 10.200.16.10 port 35930 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o
Jul 2 01:54:46.929191 sshd[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 01:54:46.933966 systemd-logind[1438]: New session 25 of user core.
Jul 2 01:54:46.934602 systemd[1]: Started session-25.scope.
Jul 2 01:54:47.246605 kubelet[2492]: E0702 01:54:47.246505 2492 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[cilium-ipsec-secrets clustermesh-secrets hubble-tls], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-xprpl" podUID="a2a3811d-6977-4f32-8c99-13bf1a17c6f7"
Jul 2 01:54:47.315263 sshd[4194]: pam_unix(sshd:session): session closed for user core
Jul 2 01:54:47.318019 systemd-logind[1438]: Session 25 logged out. Waiting for processes to exit.
Jul 2 01:54:47.318254 systemd[1]: sshd@22-10.200.20.40:22-10.200.16.10:35930.service: Deactivated successfully.
Jul 2 01:54:47.319279 systemd[1]: session-25.scope: Deactivated successfully.
Jul 2 01:54:47.320083 systemd-logind[1438]: Removed session 25.
Jul 2 01:54:47.396284 systemd[1]: Started sshd@23-10.200.20.40:22-10.200.16.10:35942.service.
Jul 2 01:54:47.502872 kubelet[2492]: E0702 01:54:47.502338 2492 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Jul 2 01:54:47.503184 kubelet[2492]: E0702 01:54:47.503160 2492 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-clustermesh-secrets podName:a2a3811d-6977-4f32-8c99-13bf1a17c6f7 nodeName:}" failed. No retries permitted until 2024-07-02 01:54:48.003137543 +0000 UTC m=+210.357655498 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-clustermesh-secrets") pod "cilium-xprpl" (UID: "a2a3811d-6977-4f32-8c99-13bf1a17c6f7") : failed to sync secret cache: timed out waiting for the condition
Jul 2 01:54:47.503184 kubelet[2492]: E0702 01:54:47.503055 2492 projected.go:267] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Jul 2 01:54:47.503282 kubelet[2492]: E0702 01:54:47.503195 2492 projected.go:198] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-xprpl: failed to sync secret cache: timed out waiting for the condition
Jul 2 01:54:47.503282 kubelet[2492]: E0702 01:54:47.503225 2492 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-hubble-tls podName:a2a3811d-6977-4f32-8c99-13bf1a17c6f7 nodeName:}" failed. No retries permitted until 2024-07-02 01:54:48.003217824 +0000 UTC m=+210.357735779 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-hubble-tls") pod "cilium-xprpl" (UID: "a2a3811d-6977-4f32-8c99-13bf1a17c6f7") : failed to sync secret cache: timed out waiting for the condition
Jul 2 01:54:47.862018 sshd[4209]: Accepted publickey for core from 10.200.16.10 port 35942 ssh2: RSA SHA256:dIfkHgYeMkxYvU2An9TnjkrclLrmoTNY/YaaZP40c9o
Jul 2 01:54:47.863622 sshd[4209]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 01:54:47.867356 systemd-logind[1438]: New session 26 of user core.
Jul 2 01:54:47.867847 systemd[1]: Started session-26.scope.
Jul 2 01:54:47.876208 kubelet[2492]: E0702 01:54:47.876179 2492 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 2 01:54:48.313789 kubelet[2492]: I0702 01:54:48.313746 2492 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxg4q\" (UniqueName: \"kubernetes.io/projected/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-kube-api-access-gxg4q\") pod \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\" (UID: \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\") "
Jul 2 01:54:48.313964 kubelet[2492]: I0702 01:54:48.313952 2492 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-xtables-lock\") pod \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\" (UID: \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\") "
Jul 2 01:54:48.314055 kubelet[2492]: I0702 01:54:48.314046 2492 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-cilium-cgroup\") pod \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\" (UID: \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\") "
Jul 2 01:54:48.314142 kubelet[2492]: I0702 01:54:48.314133 2492 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-etc-cni-netd\") pod \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\" (UID: \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\") "
Jul 2 01:54:48.314233 kubelet[2492]: I0702 01:54:48.314224 2492 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-cilium-config-path\") pod \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\" (UID: \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\") "
Jul 2 01:54:48.314337 kubelet[2492]: I0702 01:54:48.314327 2492 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-hubble-tls\") pod \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\" (UID: \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\") "
Jul 2 01:54:48.314496 kubelet[2492]: I0702 01:54:48.314486 2492 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-host-proc-sys-net\") pod \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\" (UID: \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\") "
Jul 2 01:54:48.314586 kubelet[2492]: I0702 01:54:48.314577 2492 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-hostproc\") pod \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\" (UID: \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\") "
Jul 2 01:54:48.314668 kubelet[2492]: I0702 01:54:48.314659 2492 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-bpf-maps\") pod \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\" (UID: \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\") "
Jul 2 01:54:48.314763 kubelet[2492]: I0702 01:54:48.314741 2492 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-cilium-ipsec-secrets\") pod \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\" (UID: \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\") "
Jul 2 01:54:48.314864 kubelet[2492]: I0702 01:54:48.314854 2492 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-host-proc-sys-kernel\") pod \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\" (UID: \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\") "
Jul 2 01:54:48.314953 kubelet[2492]: I0702 01:54:48.314944 2492 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-lib-modules\") pod \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\" (UID: \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\") "
Jul 2 01:54:48.315037 kubelet[2492]: I0702 01:54:48.315029 2492 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-cni-path\") pod \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\" (UID: \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\") "
Jul 2 01:54:48.315128 kubelet[2492]: I0702 01:54:48.315116 2492 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-clustermesh-secrets\") pod \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\" (UID: \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\") "
Jul 2 01:54:48.315211 kubelet[2492]: I0702 01:54:48.315202 2492 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-cilium-run\") pod \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\" (UID: \"a2a3811d-6977-4f32-8c99-13bf1a17c6f7\") "
Jul 2 01:54:48.315321 kubelet[2492]: I0702 01:54:48.315308 2492 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a2a3811d-6977-4f32-8c99-13bf1a17c6f7" (UID: "a2a3811d-6977-4f32-8c99-13bf1a17c6f7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 01:54:48.315909 kubelet[2492]: I0702 01:54:48.315887 2492 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a2a3811d-6977-4f32-8c99-13bf1a17c6f7" (UID: "a2a3811d-6977-4f32-8c99-13bf1a17c6f7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 01:54:48.316010 kubelet[2492]: I0702 01:54:48.314495 2492 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a2a3811d-6977-4f32-8c99-13bf1a17c6f7" (UID: "a2a3811d-6977-4f32-8c99-13bf1a17c6f7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 01:54:48.316075 kubelet[2492]: I0702 01:54:48.314514 2492 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a2a3811d-6977-4f32-8c99-13bf1a17c6f7" (UID: "a2a3811d-6977-4f32-8c99-13bf1a17c6f7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 01:54:48.316142 kubelet[2492]: I0702 01:54:48.316115 2492 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a2a3811d-6977-4f32-8c99-13bf1a17c6f7" (UID: "a2a3811d-6977-4f32-8c99-13bf1a17c6f7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 01:54:48.316142 kubelet[2492]: I0702 01:54:48.314526 2492 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a2a3811d-6977-4f32-8c99-13bf1a17c6f7" (UID: "a2a3811d-6977-4f32-8c99-13bf1a17c6f7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 01:54:48.316218 kubelet[2492]: I0702 01:54:48.316162 2492 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a2a3811d-6977-4f32-8c99-13bf1a17c6f7" (UID: "a2a3811d-6977-4f32-8c99-13bf1a17c6f7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 01:54:48.316218 kubelet[2492]: I0702 01:54:48.316179 2492 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a2a3811d-6977-4f32-8c99-13bf1a17c6f7" (UID: "a2a3811d-6977-4f32-8c99-13bf1a17c6f7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 01:54:48.316302 kubelet[2492]: I0702 01:54:48.316288 2492 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-hostproc" (OuterVolumeSpecName: "hostproc") pod "a2a3811d-6977-4f32-8c99-13bf1a17c6f7" (UID: "a2a3811d-6977-4f32-8c99-13bf1a17c6f7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 01:54:48.316382 kubelet[2492]: I0702 01:54:48.316370 2492 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a2a3811d-6977-4f32-8c99-13bf1a17c6f7" (UID: "a2a3811d-6977-4f32-8c99-13bf1a17c6f7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 01:54:48.316462 kubelet[2492]: I0702 01:54:48.316450 2492 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-cni-path" (OuterVolumeSpecName: "cni-path") pod "a2a3811d-6977-4f32-8c99-13bf1a17c6f7" (UID: "a2a3811d-6977-4f32-8c99-13bf1a17c6f7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 01:54:48.319639 systemd[1]: var-lib-kubelet-pods-a2a3811d\x2d6977\x2d4f32\x2d8c99\x2d13bf1a17c6f7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgxg4q.mount: Deactivated successfully.
Jul 2 01:54:48.323608 kubelet[2492]: I0702 01:54:48.321135 2492 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "a2a3811d-6977-4f32-8c99-13bf1a17c6f7" (UID: "a2a3811d-6977-4f32-8c99-13bf1a17c6f7"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 2 01:54:48.323608 kubelet[2492]: I0702 01:54:48.321214 2492 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-kube-api-access-gxg4q" (OuterVolumeSpecName: "kube-api-access-gxg4q") pod "a2a3811d-6977-4f32-8c99-13bf1a17c6f7" (UID: "a2a3811d-6977-4f32-8c99-13bf1a17c6f7"). InnerVolumeSpecName "kube-api-access-gxg4q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 01:54:48.324552 systemd[1]: var-lib-kubelet-pods-a2a3811d\x2d6977\x2d4f32\x2d8c99\x2d13bf1a17c6f7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jul 2 01:54:48.325934 kubelet[2492]: I0702 01:54:48.325666 2492 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a2a3811d-6977-4f32-8c99-13bf1a17c6f7" (UID: "a2a3811d-6977-4f32-8c99-13bf1a17c6f7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 2 01:54:48.325934 kubelet[2492]: I0702 01:54:48.325732 2492 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a2a3811d-6977-4f32-8c99-13bf1a17c6f7" (UID: "a2a3811d-6977-4f32-8c99-13bf1a17c6f7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 01:54:48.324644 systemd[1]: var-lib-kubelet-pods-a2a3811d\x2d6977\x2d4f32\x2d8c99\x2d13bf1a17c6f7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jul 2 01:54:48.324697 systemd[1]: var-lib-kubelet-pods-a2a3811d\x2d6977\x2d4f32\x2d8c99\x2d13bf1a17c6f7-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Jul 2 01:54:48.415483 kubelet[2492]: I0702 01:54:48.415443 2492 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-cni-path\") on node \"ci-3510.3.5-a-267983ca13\" DevicePath \"\""
Jul 2 01:54:48.415689 kubelet[2492]: I0702 01:54:48.415679 2492 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-clustermesh-secrets\") on node \"ci-3510.3.5-a-267983ca13\" DevicePath \"\""
Jul 2 01:54:48.415783 kubelet[2492]: I0702 01:54:48.415748 2492 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-cilium-run\") on node \"ci-3510.3.5-a-267983ca13\" DevicePath \"\""
Jul 2 01:54:48.415850 kubelet[2492]: I0702 01:54:48.415841 2492 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-gxg4q\" (UniqueName: \"kubernetes.io/projected/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-kube-api-access-gxg4q\") on node \"ci-3510.3.5-a-267983ca13\" DevicePath \"\""
Jul 2 01:54:48.415911 kubelet[2492]: I0702 01:54:48.415903 2492 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-xtables-lock\") on node \"ci-3510.3.5-a-267983ca13\" DevicePath \"\""
Jul 2 01:54:48.415971 kubelet[2492]: I0702 01:54:48.415963 2492 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-cilium-cgroup\") on node \"ci-3510.3.5-a-267983ca13\" DevicePath \"\""
Jul 2 01:54:48.416030 kubelet[2492]: I0702 01:54:48.416022 2492 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-etc-cni-netd\") on node \"ci-3510.3.5-a-267983ca13\" DevicePath \"\""
Jul 2 01:54:48.416088 kubelet[2492]: I0702 01:54:48.416079 2492 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-cilium-config-path\") on node \"ci-3510.3.5-a-267983ca13\" DevicePath \"\""
Jul 2 01:54:48.416163 kubelet[2492]: I0702 01:54:48.416139 2492 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-hubble-tls\") on node \"ci-3510.3.5-a-267983ca13\" DevicePath \"\""
Jul 2 01:54:48.416226 kubelet[2492]: I0702 01:54:48.416217 2492 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-host-proc-sys-net\") on node \"ci-3510.3.5-a-267983ca13\" DevicePath \"\""
Jul 2 01:54:48.416285 kubelet[2492]: I0702 01:54:48.416277 2492 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-hostproc\") on node \"ci-3510.3.5-a-267983ca13\" DevicePath \"\""
Jul 2 01:54:48.416341 kubelet[2492]: I0702 01:54:48.416333 2492 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-bpf-maps\") on node \"ci-3510.3.5-a-267983ca13\" DevicePath \"\""
Jul 2 01:54:48.416396 kubelet[2492]: I0702 01:54:48.416388 2492 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-cilium-ipsec-secrets\") on node \"ci-3510.3.5-a-267983ca13\" DevicePath \"\""
Jul 2 01:54:48.416454 kubelet[2492]: I0702 01:54:48.416446 2492 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-host-proc-sys-kernel\") on node \"ci-3510.3.5-a-267983ca13\" DevicePath \"\""
Jul 2 01:54:48.416513 kubelet[2492]: I0702 01:54:48.416505 2492 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2a3811d-6977-4f32-8c99-13bf1a17c6f7-lib-modules\") on node \"ci-3510.3.5-a-267983ca13\" DevicePath \"\""
Jul 2 01:54:49.213796 systemd[1]: Removed slice kubepods-burstable-poda2a3811d_6977_4f32_8c99_13bf1a17c6f7.slice.
Jul 2 01:54:49.251528 kubelet[2492]: I0702 01:54:49.251479 2492 topology_manager.go:215] "Topology Admit Handler" podUID="e110ba58-4270-4964-9dd8-ee3d8c39fcc1" podNamespace="kube-system" podName="cilium-25nhs"
Jul 2 01:54:49.256482 systemd[1]: Created slice kubepods-burstable-pode110ba58_4270_4964_9dd8_ee3d8c39fcc1.slice.
Jul 2 01:54:49.320449 kubelet[2492]: I0702 01:54:49.320361 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e110ba58-4270-4964-9dd8-ee3d8c39fcc1-etc-cni-netd\") pod \"cilium-25nhs\" (UID: \"e110ba58-4270-4964-9dd8-ee3d8c39fcc1\") " pod="kube-system/cilium-25nhs"
Jul 2 01:54:49.320593 kubelet[2492]: I0702 01:54:49.320469 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e110ba58-4270-4964-9dd8-ee3d8c39fcc1-clustermesh-secrets\") pod \"cilium-25nhs\" (UID: \"e110ba58-4270-4964-9dd8-ee3d8c39fcc1\") " pod="kube-system/cilium-25nhs"
Jul 2 01:54:49.320593 kubelet[2492]: I0702 01:54:49.320493 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e110ba58-4270-4964-9dd8-ee3d8c39fcc1-hubble-tls\") pod \"cilium-25nhs\" (UID: \"e110ba58-4270-4964-9dd8-ee3d8c39fcc1\") " pod="kube-system/cilium-25nhs"
Jul 2 01:54:49.320593 kubelet[2492]: I0702 01:54:49.320513 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e110ba58-4270-4964-9dd8-ee3d8c39fcc1-cilium-run\") pod \"cilium-25nhs\" (UID: \"e110ba58-4270-4964-9dd8-ee3d8c39fcc1\") " pod="kube-system/cilium-25nhs"
Jul 2 01:54:49.320593 kubelet[2492]: I0702 01:54:49.320545 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trktk\" (UniqueName: \"kubernetes.io/projected/e110ba58-4270-4964-9dd8-ee3d8c39fcc1-kube-api-access-trktk\") pod \"cilium-25nhs\" (UID: \"e110ba58-4270-4964-9dd8-ee3d8c39fcc1\") " pod="kube-system/cilium-25nhs"
Jul 2 01:54:49.320593 kubelet[2492]: I0702 01:54:49.320566 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e110ba58-4270-4964-9dd8-ee3d8c39fcc1-cni-path\") pod \"cilium-25nhs\" (UID: \"e110ba58-4270-4964-9dd8-ee3d8c39fcc1\") " pod="kube-system/cilium-25nhs"
Jul 2 01:54:49.320593 kubelet[2492]: I0702 01:54:49.320583 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e110ba58-4270-4964-9dd8-ee3d8c39fcc1-lib-modules\") pod \"cilium-25nhs\" (UID: \"e110ba58-4270-4964-9dd8-ee3d8c39fcc1\") " pod="kube-system/cilium-25nhs"
Jul 2 01:54:49.320741 kubelet[2492]: I0702 01:54:49.320602 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e110ba58-4270-4964-9dd8-ee3d8c39fcc1-hostproc\") pod \"cilium-25nhs\" (UID: \"e110ba58-4270-4964-9dd8-ee3d8c39fcc1\") " pod="kube-system/cilium-25nhs"
Jul 2 01:54:49.320741 kubelet[2492]: I0702 01:54:49.320632 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e110ba58-4270-4964-9dd8-ee3d8c39fcc1-cilium-config-path\") pod \"cilium-25nhs\" (UID: \"e110ba58-4270-4964-9dd8-ee3d8c39fcc1\") " pod="kube-system/cilium-25nhs"
Jul 2 01:54:49.320741 kubelet[2492]: I0702 01:54:49.320652 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e110ba58-4270-4964-9dd8-ee3d8c39fcc1-xtables-lock\") pod \"cilium-25nhs\" (UID: \"e110ba58-4270-4964-9dd8-ee3d8c39fcc1\") " pod="kube-system/cilium-25nhs"
Jul 2 01:54:49.320741 kubelet[2492]: I0702 01:54:49.320669 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e110ba58-4270-4964-9dd8-ee3d8c39fcc1-cilium-ipsec-secrets\") pod \"cilium-25nhs\" (UID: \"e110ba58-4270-4964-9dd8-ee3d8c39fcc1\") " pod="kube-system/cilium-25nhs"
Jul 2 01:54:49.320741 kubelet[2492]: I0702 01:54:49.320696 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e110ba58-4270-4964-9dd8-ee3d8c39fcc1-host-proc-sys-kernel\") pod \"cilium-25nhs\" (UID: \"e110ba58-4270-4964-9dd8-ee3d8c39fcc1\") " pod="kube-system/cilium-25nhs"
Jul 2 01:54:49.320741 kubelet[2492]: I0702 01:54:49.320716 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e110ba58-4270-4964-9dd8-ee3d8c39fcc1-bpf-maps\") pod \"cilium-25nhs\" (UID: \"e110ba58-4270-4964-9dd8-ee3d8c39fcc1\") " pod="kube-system/cilium-25nhs"
Jul 2 01:54:49.320911 kubelet[2492]: I0702 01:54:49.320734 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e110ba58-4270-4964-9dd8-ee3d8c39fcc1-cilium-cgroup\") pod \"cilium-25nhs\" (UID: \"e110ba58-4270-4964-9dd8-ee3d8c39fcc1\") " pod="kube-system/cilium-25nhs"
Jul 2 01:54:49.320911 kubelet[2492]: I0702 01:54:49.320762 2492 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e110ba58-4270-4964-9dd8-ee3d8c39fcc1-host-proc-sys-net\") pod \"cilium-25nhs\" (UID: \"e110ba58-4270-4964-9dd8-ee3d8c39fcc1\") " pod="kube-system/cilium-25nhs"
Jul 2 01:54:49.559557 env[1451]: time="2024-07-02T01:54:49.559448303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-25nhs,Uid:e110ba58-4270-4964-9dd8-ee3d8c39fcc1,Namespace:kube-system,Attempt:0,}"
Jul 2 01:54:49.602063 env[1451]: time="2024-07-02T01:54:49.601979073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 01:54:49.602299 env[1451]: time="2024-07-02T01:54:49.602040473Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 01:54:49.602299 env[1451]: time="2024-07-02T01:54:49.602051313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 01:54:49.603051 env[1451]: time="2024-07-02T01:54:49.603016075Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4ee17b1ca889574200efe12b6a9c5447e93065d56d8333ad85b0be330f20d612 pid=4236 runtime=io.containerd.runc.v2
Jul 2 01:54:49.614301 systemd[1]: Started cri-containerd-4ee17b1ca889574200efe12b6a9c5447e93065d56d8333ad85b0be330f20d612.scope.
Jul 2 01:54:49.643208 env[1451]: time="2024-07-02T01:54:49.643171840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-25nhs,Uid:e110ba58-4270-4964-9dd8-ee3d8c39fcc1,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ee17b1ca889574200efe12b6a9c5447e93065d56d8333ad85b0be330f20d612\"" Jul 2 01:54:49.645991 env[1451]: time="2024-07-02T01:54:49.645963886Z" level=info msg="CreateContainer within sandbox \"4ee17b1ca889574200efe12b6a9c5447e93065d56d8333ad85b0be330f20d612\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 01:54:49.687714 env[1451]: time="2024-07-02T01:54:49.687671814Z" level=info msg="CreateContainer within sandbox \"4ee17b1ca889574200efe12b6a9c5447e93065d56d8333ad85b0be330f20d612\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"864caf9d29d6f47681ef3db0f8944fcb1bbc1432b6980f9e34310cfc8c488e06\"" Jul 2 01:54:49.688350 env[1451]: time="2024-07-02T01:54:49.688314816Z" level=info msg="StartContainer for \"864caf9d29d6f47681ef3db0f8944fcb1bbc1432b6980f9e34310cfc8c488e06\"" Jul 2 01:54:49.703311 systemd[1]: Started cri-containerd-864caf9d29d6f47681ef3db0f8944fcb1bbc1432b6980f9e34310cfc8c488e06.scope. Jul 2 01:54:49.735173 env[1451]: time="2024-07-02T01:54:49.735129955Z" level=info msg="StartContainer for \"864caf9d29d6f47681ef3db0f8944fcb1bbc1432b6980f9e34310cfc8c488e06\" returns successfully" Jul 2 01:54:49.735583 systemd[1]: cri-containerd-864caf9d29d6f47681ef3db0f8944fcb1bbc1432b6980f9e34310cfc8c488e06.scope: Deactivated successfully. 
Jul 2 01:54:49.766449 kubelet[2492]: I0702 01:54:49.766424 2492 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a2a3811d-6977-4f32-8c99-13bf1a17c6f7" path="/var/lib/kubelet/pods/a2a3811d-6977-4f32-8c99-13bf1a17c6f7/volumes" Jul 2 01:54:49.813416 env[1451]: time="2024-07-02T01:54:49.813376360Z" level=info msg="shim disconnected" id=864caf9d29d6f47681ef3db0f8944fcb1bbc1432b6980f9e34310cfc8c488e06 Jul 2 01:54:49.813671 env[1451]: time="2024-07-02T01:54:49.813644160Z" level=warning msg="cleaning up after shim disconnected" id=864caf9d29d6f47681ef3db0f8944fcb1bbc1432b6980f9e34310cfc8c488e06 namespace=k8s.io Jul 2 01:54:49.813771 env[1451]: time="2024-07-02T01:54:49.813736441Z" level=info msg="cleaning up dead shim" Jul 2 01:54:49.820983 env[1451]: time="2024-07-02T01:54:49.820947776Z" level=warning msg="cleanup warnings time=\"2024-07-02T01:54:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4322 runtime=io.containerd.runc.v2\n" Jul 2 01:54:50.215490 env[1451]: time="2024-07-02T01:54:50.215395959Z" level=info msg="CreateContainer within sandbox \"4ee17b1ca889574200efe12b6a9c5447e93065d56d8333ad85b0be330f20d612\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 01:54:50.253856 env[1451]: time="2024-07-02T01:54:50.253812559Z" level=info msg="CreateContainer within sandbox \"4ee17b1ca889574200efe12b6a9c5447e93065d56d8333ad85b0be330f20d612\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ebb1a2f548dd27d1ebfb509758462bd7b3ca8ae8b4c2e1cacef0eaa31830c4d0\"" Jul 2 01:54:50.254468 env[1451]: time="2024-07-02T01:54:50.254440520Z" level=info msg="StartContainer for \"ebb1a2f548dd27d1ebfb509758462bd7b3ca8ae8b4c2e1cacef0eaa31830c4d0\"" Jul 2 01:54:50.267351 systemd[1]: Started cri-containerd-ebb1a2f548dd27d1ebfb509758462bd7b3ca8ae8b4c2e1cacef0eaa31830c4d0.scope. 
Jul 2 01:54:50.296709 systemd[1]: cri-containerd-ebb1a2f548dd27d1ebfb509758462bd7b3ca8ae8b4c2e1cacef0eaa31830c4d0.scope: Deactivated successfully. Jul 2 01:54:50.297456 env[1451]: time="2024-07-02T01:54:50.297422849Z" level=info msg="StartContainer for \"ebb1a2f548dd27d1ebfb509758462bd7b3ca8ae8b4c2e1cacef0eaa31830c4d0\" returns successfully" Jul 2 01:54:50.332348 env[1451]: time="2024-07-02T01:54:50.332305161Z" level=info msg="shim disconnected" id=ebb1a2f548dd27d1ebfb509758462bd7b3ca8ae8b4c2e1cacef0eaa31830c4d0 Jul 2 01:54:50.332621 env[1451]: time="2024-07-02T01:54:50.332603001Z" level=warning msg="cleaning up after shim disconnected" id=ebb1a2f548dd27d1ebfb509758462bd7b3ca8ae8b4c2e1cacef0eaa31830c4d0 namespace=k8s.io Jul 2 01:54:50.332689 env[1451]: time="2024-07-02T01:54:50.332676721Z" level=info msg="cleaning up dead shim" Jul 2 01:54:50.338693 env[1451]: time="2024-07-02T01:54:50.338664934Z" level=warning msg="cleanup warnings time=\"2024-07-02T01:54:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4383 runtime=io.containerd.runc.v2\n" Jul 2 01:54:51.217839 env[1451]: time="2024-07-02T01:54:51.217797780Z" level=info msg="CreateContainer within sandbox \"4ee17b1ca889574200efe12b6a9c5447e93065d56d8333ad85b0be330f20d612\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 01:54:51.261724 env[1451]: time="2024-07-02T01:54:51.261675269Z" level=info msg="CreateContainer within sandbox \"4ee17b1ca889574200efe12b6a9c5447e93065d56d8333ad85b0be330f20d612\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c8ff06814c1ec916556b2a21c4323343650a8cfa488e22de6232fb4de6a60fa1\"" Jul 2 01:54:51.262211 env[1451]: time="2024-07-02T01:54:51.262188910Z" level=info msg="StartContainer for \"c8ff06814c1ec916556b2a21c4323343650a8cfa488e22de6232fb4de6a60fa1\"" Jul 2 01:54:51.279187 systemd[1]: Started cri-containerd-c8ff06814c1ec916556b2a21c4323343650a8cfa488e22de6232fb4de6a60fa1.scope. 
Jul 2 01:54:51.311938 systemd[1]: cri-containerd-c8ff06814c1ec916556b2a21c4323343650a8cfa488e22de6232fb4de6a60fa1.scope: Deactivated successfully. Jul 2 01:54:51.316545 env[1451]: time="2024-07-02T01:54:51.316506259Z" level=info msg="StartContainer for \"c8ff06814c1ec916556b2a21c4323343650a8cfa488e22de6232fb4de6a60fa1\" returns successfully" Jul 2 01:54:51.344479 env[1451]: time="2024-07-02T01:54:51.344370396Z" level=info msg="shim disconnected" id=c8ff06814c1ec916556b2a21c4323343650a8cfa488e22de6232fb4de6a60fa1 Jul 2 01:54:51.344669 env[1451]: time="2024-07-02T01:54:51.344650596Z" level=warning msg="cleaning up after shim disconnected" id=c8ff06814c1ec916556b2a21c4323343650a8cfa488e22de6232fb4de6a60fa1 namespace=k8s.io Jul 2 01:54:51.344746 env[1451]: time="2024-07-02T01:54:51.344733516Z" level=info msg="cleaning up dead shim" Jul 2 01:54:51.351960 env[1451]: time="2024-07-02T01:54:51.351923531Z" level=warning msg="cleanup warnings time=\"2024-07-02T01:54:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4439 runtime=io.containerd.runc.v2\n" Jul 2 01:54:51.434370 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8ff06814c1ec916556b2a21c4323343650a8cfa488e22de6232fb4de6a60fa1-rootfs.mount: Deactivated successfully. 
Jul 2 01:54:51.786746 kubelet[2492]: I0702 01:54:51.786718 2492 setters.go:552] "Node became not ready" node="ci-3510.3.5-a-267983ca13" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T01:54:51Z","lastTransitionTime":"2024-07-02T01:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 2 01:54:52.221784 env[1451]: time="2024-07-02T01:54:52.221630477Z" level=info msg="CreateContainer within sandbox \"4ee17b1ca889574200efe12b6a9c5447e93065d56d8333ad85b0be330f20d612\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 01:54:52.267241 env[1451]: time="2024-07-02T01:54:52.267187367Z" level=info msg="CreateContainer within sandbox \"4ee17b1ca889574200efe12b6a9c5447e93065d56d8333ad85b0be330f20d612\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5a411aea95fdb36816b70a82d7c130a80635b1dea515d6e120cc5a24479de248\"" Jul 2 01:54:52.267975 env[1451]: time="2024-07-02T01:54:52.267943849Z" level=info msg="StartContainer for \"5a411aea95fdb36816b70a82d7c130a80635b1dea515d6e120cc5a24479de248\"" Jul 2 01:54:52.281598 systemd[1]: Started cri-containerd-5a411aea95fdb36816b70a82d7c130a80635b1dea515d6e120cc5a24479de248.scope. Jul 2 01:54:52.307146 systemd[1]: cri-containerd-5a411aea95fdb36816b70a82d7c130a80635b1dea515d6e120cc5a24479de248.scope: Deactivated successfully. 
Jul 2 01:54:52.309443 env[1451]: time="2024-07-02T01:54:52.308900169Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode110ba58_4270_4964_9dd8_ee3d8c39fcc1.slice/cri-containerd-5a411aea95fdb36816b70a82d7c130a80635b1dea515d6e120cc5a24479de248.scope/memory.events\": no such file or directory" Jul 2 01:54:52.313729 env[1451]: time="2024-07-02T01:54:52.313692979Z" level=info msg="StartContainer for \"5a411aea95fdb36816b70a82d7c130a80635b1dea515d6e120cc5a24479de248\" returns successfully" Jul 2 01:54:52.346723 env[1451]: time="2024-07-02T01:54:52.346678764Z" level=info msg="shim disconnected" id=5a411aea95fdb36816b70a82d7c130a80635b1dea515d6e120cc5a24479de248 Jul 2 01:54:52.346723 env[1451]: time="2024-07-02T01:54:52.346723724Z" level=warning msg="cleaning up after shim disconnected" id=5a411aea95fdb36816b70a82d7c130a80635b1dea515d6e120cc5a24479de248 namespace=k8s.io Jul 2 01:54:52.347000 env[1451]: time="2024-07-02T01:54:52.346734044Z" level=info msg="cleaning up dead shim" Jul 2 01:54:52.353644 env[1451]: time="2024-07-02T01:54:52.353599178Z" level=warning msg="cleanup warnings time=\"2024-07-02T01:54:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4495 runtime=io.containerd.runc.v2\n" Jul 2 01:54:52.877218 kubelet[2492]: E0702 01:54:52.877187 2492 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 01:54:53.225685 env[1451]: time="2024-07-02T01:54:53.225582449Z" level=info msg="CreateContainer within sandbox \"4ee17b1ca889574200efe12b6a9c5447e93065d56d8333ad85b0be330f20d612\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 01:54:53.270566 env[1451]: time="2024-07-02T01:54:53.270515215Z" level=info msg="CreateContainer within sandbox 
\"4ee17b1ca889574200efe12b6a9c5447e93065d56d8333ad85b0be330f20d612\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"65b61ea86c207aa8a6bf77d5ce12137484e11fd6431787e641946dc182e41096\"" Jul 2 01:54:53.272312 env[1451]: time="2024-07-02T01:54:53.271366257Z" level=info msg="StartContainer for \"65b61ea86c207aa8a6bf77d5ce12137484e11fd6431787e641946dc182e41096\"" Jul 2 01:54:53.290654 systemd[1]: Started cri-containerd-65b61ea86c207aa8a6bf77d5ce12137484e11fd6431787e641946dc182e41096.scope. Jul 2 01:54:53.326781 env[1451]: time="2024-07-02T01:54:53.322826956Z" level=info msg="StartContainer for \"65b61ea86c207aa8a6bf77d5ce12137484e11fd6431787e641946dc182e41096\" returns successfully" Jul 2 01:54:53.772933 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Jul 2 01:54:54.349299 systemd[1]: run-containerd-runc-k8s.io-65b61ea86c207aa8a6bf77d5ce12137484e11fd6431787e641946dc182e41096-runc.UAWoAZ.mount: Deactivated successfully. Jul 2 01:54:56.218681 systemd-networkd[1604]: lxc_health: Link UP Jul 2 01:54:56.235863 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 01:54:56.236163 systemd-networkd[1604]: lxc_health: Gained carrier Jul 2 01:54:56.477731 systemd[1]: run-containerd-runc-k8s.io-65b61ea86c207aa8a6bf77d5ce12137484e11fd6431787e641946dc182e41096-runc.OhXDet.mount: Deactivated successfully. 
Jul 2 01:54:57.591865 kubelet[2492]: I0702 01:54:57.591825 2492 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-25nhs" podStartSLOduration=8.591787828 podCreationTimestamp="2024-07-02 01:54:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 01:54:54.244629604 +0000 UTC m=+216.599147559" watchObservedRunningTime="2024-07-02 01:54:57.591787828 +0000 UTC m=+219.946305783" Jul 2 01:54:57.679904 systemd-networkd[1604]: lxc_health: Gained IPv6LL Jul 2 01:54:58.630028 systemd[1]: run-containerd-runc-k8s.io-65b61ea86c207aa8a6bf77d5ce12137484e11fd6431787e641946dc182e41096-runc.3IckHa.mount: Deactivated successfully. Jul 2 01:55:00.780805 systemd[1]: run-containerd-runc-k8s.io-65b61ea86c207aa8a6bf77d5ce12137484e11fd6431787e641946dc182e41096-runc.OH5HLf.mount: Deactivated successfully. Jul 2 01:55:02.955560 systemd[1]: run-containerd-runc-k8s.io-65b61ea86c207aa8a6bf77d5ce12137484e11fd6431787e641946dc182e41096-runc.yuzBO3.mount: Deactivated successfully. Jul 2 01:55:03.092315 sshd[4209]: pam_unix(sshd:session): session closed for user core Jul 2 01:55:03.095260 systemd-logind[1438]: Session 26 logged out. Waiting for processes to exit. Jul 2 01:55:03.096042 systemd[1]: session-26.scope: Deactivated successfully. Jul 2 01:55:03.097177 systemd-logind[1438]: Removed session 26. Jul 2 01:55:03.097575 systemd[1]: sshd@23-10.200.20.40:22-10.200.16.10:35942.service: Deactivated successfully. 
Jul 2 01:55:17.769213 env[1451]: time="2024-07-02T01:55:17.769159265Z" level=info msg="StopPodSandbox for \"4054fe9b7210dbdf1feca1e604d711988538f576afe85dbc1428be19a016df9a\"" Jul 2 01:55:17.769593 env[1451]: time="2024-07-02T01:55:17.769260865Z" level=info msg="TearDown network for sandbox \"4054fe9b7210dbdf1feca1e604d711988538f576afe85dbc1428be19a016df9a\" successfully" Jul 2 01:55:17.769593 env[1451]: time="2024-07-02T01:55:17.769305025Z" level=info msg="StopPodSandbox for \"4054fe9b7210dbdf1feca1e604d711988538f576afe85dbc1428be19a016df9a\" returns successfully" Jul 2 01:55:17.771295 env[1451]: time="2024-07-02T01:55:17.770115866Z" level=info msg="RemovePodSandbox for \"4054fe9b7210dbdf1feca1e604d711988538f576afe85dbc1428be19a016df9a\"" Jul 2 01:55:17.771295 env[1451]: time="2024-07-02T01:55:17.770146386Z" level=info msg="Forcibly stopping sandbox \"4054fe9b7210dbdf1feca1e604d711988538f576afe85dbc1428be19a016df9a\"" Jul 2 01:55:17.771295 env[1451]: time="2024-07-02T01:55:17.770232386Z" level=info msg="TearDown network for sandbox \"4054fe9b7210dbdf1feca1e604d711988538f576afe85dbc1428be19a016df9a\" successfully" Jul 2 01:55:17.782060 env[1451]: time="2024-07-02T01:55:17.781878958Z" level=info msg="RemovePodSandbox \"4054fe9b7210dbdf1feca1e604d711988538f576afe85dbc1428be19a016df9a\" returns successfully" Jul 2 01:55:17.783423 env[1451]: time="2024-07-02T01:55:17.783195280Z" level=info msg="StopPodSandbox for \"c850f411d3766e61085b0441d63c41d1ae59a054e9a9d463dc17bcb7d041a000\"" Jul 2 01:55:17.783423 env[1451]: time="2024-07-02T01:55:17.783300680Z" level=info msg="TearDown network for sandbox \"c850f411d3766e61085b0441d63c41d1ae59a054e9a9d463dc17bcb7d041a000\" successfully" Jul 2 01:55:17.783423 env[1451]: time="2024-07-02T01:55:17.783332840Z" level=info msg="StopPodSandbox for \"c850f411d3766e61085b0441d63c41d1ae59a054e9a9d463dc17bcb7d041a000\" returns successfully" Jul 2 01:55:17.783902 env[1451]: time="2024-07-02T01:55:17.783739720Z" level=info 
msg="RemovePodSandbox for \"c850f411d3766e61085b0441d63c41d1ae59a054e9a9d463dc17bcb7d041a000\"" Jul 2 01:55:17.783902 env[1451]: time="2024-07-02T01:55:17.783820800Z" level=info msg="Forcibly stopping sandbox \"c850f411d3766e61085b0441d63c41d1ae59a054e9a9d463dc17bcb7d041a000\"" Jul 2 01:55:17.784033 env[1451]: time="2024-07-02T01:55:17.783916641Z" level=info msg="TearDown network for sandbox \"c850f411d3766e61085b0441d63c41d1ae59a054e9a9d463dc17bcb7d041a000\" successfully" Jul 2 01:55:17.792773 env[1451]: time="2024-07-02T01:55:17.792716210Z" level=info msg="RemovePodSandbox \"c850f411d3766e61085b0441d63c41d1ae59a054e9a9d463dc17bcb7d041a000\" returns successfully" Jul 2 01:55:18.165323 systemd[1]: cri-containerd-82604f5127395ac2c2aa876a0adc79c953284d2db7abd87f53d54bdc51c2e624.scope: Deactivated successfully. Jul 2 01:55:18.165630 systemd[1]: cri-containerd-82604f5127395ac2c2aa876a0adc79c953284d2db7abd87f53d54bdc51c2e624.scope: Consumed 2.297s CPU time. Jul 2 01:55:18.166592 kubelet[2492]: E0702 01:55:18.166570 2492 controller.go:193] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.40:47608->10.200.20.30:2379: read: connection timed out" Jul 2 01:55:18.185083 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82604f5127395ac2c2aa876a0adc79c953284d2db7abd87f53d54bdc51c2e624-rootfs.mount: Deactivated successfully. 
Jul 2 01:55:18.207278 env[1451]: time="2024-07-02T01:55:18.207227515Z" level=info msg="shim disconnected" id=82604f5127395ac2c2aa876a0adc79c953284d2db7abd87f53d54bdc51c2e624 Jul 2 01:55:18.207278 env[1451]: time="2024-07-02T01:55:18.207272795Z" level=warning msg="cleaning up after shim disconnected" id=82604f5127395ac2c2aa876a0adc79c953284d2db7abd87f53d54bdc51c2e624 namespace=k8s.io Jul 2 01:55:18.207278 env[1451]: time="2024-07-02T01:55:18.207283275Z" level=info msg="cleaning up dead shim" Jul 2 01:55:18.213920 env[1451]: time="2024-07-02T01:55:18.213869041Z" level=warning msg="cleanup warnings time=\"2024-07-02T01:55:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5199 runtime=io.containerd.runc.v2\n" Jul 2 01:55:18.277378 kubelet[2492]: I0702 01:55:18.277352 2492 scope.go:117] "RemoveContainer" containerID="82604f5127395ac2c2aa876a0adc79c953284d2db7abd87f53d54bdc51c2e624" Jul 2 01:55:18.280851 env[1451]: time="2024-07-02T01:55:18.280800469Z" level=info msg="CreateContainer within sandbox \"ce9d8f1a894f4267e613cae379e733bb95a2a5785529ff6b603db82afdeb62dc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jul 2 01:55:18.309188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4050083221.mount: Deactivated successfully. Jul 2 01:55:18.324701 env[1451]: time="2024-07-02T01:55:18.324624713Z" level=info msg="CreateContainer within sandbox \"ce9d8f1a894f4267e613cae379e733bb95a2a5785529ff6b603db82afdeb62dc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"5bc60e083825fd83e6028acbbd84434fbd4d377c51f4332714d70418fc2a642e\"" Jul 2 01:55:18.325222 env[1451]: time="2024-07-02T01:55:18.325197354Z" level=info msg="StartContainer for \"5bc60e083825fd83e6028acbbd84434fbd4d377c51f4332714d70418fc2a642e\"" Jul 2 01:55:18.344101 systemd[1]: Started cri-containerd-5bc60e083825fd83e6028acbbd84434fbd4d377c51f4332714d70418fc2a642e.scope. 
Jul 2 01:55:18.391127 env[1451]: time="2024-07-02T01:55:18.391063980Z" level=info msg="StartContainer for \"5bc60e083825fd83e6028acbbd84434fbd4d377c51f4332714d70418fc2a642e\" returns successfully" Jul 2 01:55:19.422210 systemd[1]: cri-containerd-e8a2d784c1cf8ea62f4deaa2944c3071ae766b8ac224abdcdebb7e4b9e48ff1f.scope: Deactivated successfully. Jul 2 01:55:19.422516 systemd[1]: cri-containerd-e8a2d784c1cf8ea62f4deaa2944c3071ae766b8ac224abdcdebb7e4b9e48ff1f.scope: Consumed 3.813s CPU time. Jul 2 01:55:19.441310 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8a2d784c1cf8ea62f4deaa2944c3071ae766b8ac224abdcdebb7e4b9e48ff1f-rootfs.mount: Deactivated successfully. Jul 2 01:55:19.459774 env[1451]: time="2024-07-02T01:55:19.459710686Z" level=info msg="shim disconnected" id=e8a2d784c1cf8ea62f4deaa2944c3071ae766b8ac224abdcdebb7e4b9e48ff1f Jul 2 01:55:19.459774 env[1451]: time="2024-07-02T01:55:19.459777406Z" level=warning msg="cleaning up after shim disconnected" id=e8a2d784c1cf8ea62f4deaa2944c3071ae766b8ac224abdcdebb7e4b9e48ff1f namespace=k8s.io Jul 2 01:55:19.460235 env[1451]: time="2024-07-02T01:55:19.459787846Z" level=info msg="cleaning up dead shim" Jul 2 01:55:19.467082 env[1451]: time="2024-07-02T01:55:19.467042174Z" level=warning msg="cleanup warnings time=\"2024-07-02T01:55:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5262 runtime=io.containerd.runc.v2\n" Jul 2 01:55:20.283703 kubelet[2492]: I0702 01:55:20.283672 2492 scope.go:117] "RemoveContainer" containerID="e8a2d784c1cf8ea62f4deaa2944c3071ae766b8ac224abdcdebb7e4b9e48ff1f" Jul 2 01:55:20.286022 env[1451]: time="2024-07-02T01:55:20.285982368Z" level=info msg="CreateContainer within sandbox \"94d57fe54934024f2093022114b3c0539bedd192aa2c932de0b7946a55e61c1c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jul 2 01:55:20.319109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3451122679.mount: Deactivated successfully. 
Jul 2 01:55:20.340309 env[1451]: time="2024-07-02T01:55:20.340260340Z" level=info msg="CreateContainer within sandbox \"94d57fe54934024f2093022114b3c0539bedd192aa2c932de0b7946a55e61c1c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"2720052c68af14a9cf3aad4283674a06d0079438bc73da16707854c3784fdb76\"" Jul 2 01:55:20.341063 env[1451]: time="2024-07-02T01:55:20.341039460Z" level=info msg="StartContainer for \"2720052c68af14a9cf3aad4283674a06d0079438bc73da16707854c3784fdb76\"" Jul 2 01:55:20.359977 systemd[1]: Started cri-containerd-2720052c68af14a9cf3aad4283674a06d0079438bc73da16707854c3784fdb76.scope. Jul 2 01:55:20.406609 env[1451]: time="2024-07-02T01:55:20.406560163Z" level=info msg="StartContainer for \"2720052c68af14a9cf3aad4283674a06d0079438bc73da16707854c3784fdb76\" returns successfully" Jul 2 01:55:22.833001 kubelet[2492]: E0702 01:55:22.832871 2492 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-ci-3510.3.5-a-267983ca13.17de42944a22a962", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-ci-3510.3.5-a-267983ca13", UID:"f85d28cd6ab984b39bec3be85cc2e062", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.5-a-267983ca13"}, FirstTimestamp:time.Date(2024, time.July, 2, 1, 55, 12, 368486754, time.Local), 
LastTimestamp:time.Date(2024, time.July, 2, 1, 55, 12, 368486754, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510.3.5-a-267983ca13"}': 'rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.40:47408->10.200.20.30:2379: read: connection timed out' (will not retry!) Jul 2 01:55:28.167693 kubelet[2492]: E0702 01:55:28.167633 2492 controller.go:193] "Failed to update lease" err="Put \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-a-267983ca13?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 2 01:55:28.720807 kubelet[2492]: I0702 01:55:28.720748 2492 status_manager.go:853] "Failed to get status for pod" podUID="06d89e0ebe90815a05dd039fad2e4dd8" pod="kube-system/kube-scheduler-ci-3510.3.5-a-267983ca13" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.40:47510->10.200.20.30:2379: read: connection timed out" Jul 2 01:55:29.123778 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:29.896026 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:29.912024 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:29.912214 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:29.912324 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:29.925955 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:29.926217 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:29.940930 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:29.948722 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:29.964546 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:29.964653 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:29.967045 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:29.979723 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:29.979961 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:29.994694 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:29.994936 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:30.009669 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:30.025523 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:30.025734 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:30.025883 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:30.040470 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:30.040711 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:30.055500 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:30.055724 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:30.079006 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:30.079255 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:30.079378 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:30.094024 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:30.101314 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:30.117944 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 
01:55:30.118119 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:30.126308 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:30.134241 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:30.141877 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:30.149494 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:30.165396 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:30.165563 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:30.173877 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:30.181894 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:30.197783 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:30.197919 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:30.205800 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:30.213330 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:30.221460 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 01:55:30.229223 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#127 cmd 0x2a status: scsi 0x2 
srb 0x4 hv 0xc0000001