Jul 2 02:29:17.267922 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 2 02:29:17.267941 kernel: Linux version 5.15.161-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Jul 1 23:37:37 -00 2024
Jul 2 02:29:17.267949 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jul 2 02:29:17.267956 kernel: printk: bootconsole [pl11] enabled
Jul 2 02:29:17.267961 kernel: efi: EFI v2.70 by EDK II
Jul 2 02:29:17.267967 kernel: efi: ACPI 2.0=0x3fd89018 SMBIOS=0x3fd66000 SMBIOS 3.0=0x3fd64000 MEMATTR=0x3ef2e698 RNG=0x3fd89998 MEMRESERVE=0x37b33f98
Jul 2 02:29:17.267973 kernel: random: crng init done
Jul 2 02:29:17.267979 kernel: ACPI: Early table checksum verification disabled
Jul 2 02:29:17.267984 kernel: ACPI: RSDP 0x000000003FD89018 000024 (v02 VRTUAL)
Jul 2 02:29:17.267989 kernel: ACPI: XSDT 0x000000003FD89F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 02:29:17.267995 kernel: ACPI: FACP 0x000000003FD89C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 02:29:17.268002 kernel: ACPI: DSDT 0x000000003EBD2018 01DEC0 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Jul 2 02:29:17.268007 kernel: ACPI: DBG2 0x000000003FD89B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 02:29:17.268013 kernel: ACPI: GTDT 0x000000003FD89D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 02:29:17.268020 kernel: ACPI: OEM0 0x000000003FD89098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 02:29:17.268026 kernel: ACPI: SPCR 0x000000003FD89A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 02:29:17.268032 kernel: ACPI: APIC 0x000000003FD89818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 02:29:17.268039 kernel: ACPI: SRAT 0x000000003FD89198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 02:29:17.268045 kernel: ACPI: PPTT 0x000000003FD89418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jul 2 02:29:17.268051 kernel: ACPI: BGRT 0x000000003FD89E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 02:29:17.268057 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jul 2 02:29:17.268063 kernel: NUMA: Failed to initialise from firmware
Jul 2 02:29:17.268068 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff]
Jul 2 02:29:17.268074 kernel: NUMA: NODE_DATA [mem 0x1bf7f1900-0x1bf7f6fff]
Jul 2 02:29:17.268080 kernel: Zone ranges:
Jul 2 02:29:17.268086 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jul 2 02:29:17.268091 kernel: DMA32 empty
Jul 2 02:29:17.268098 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jul 2 02:29:17.268104 kernel: Movable zone start for each node
Jul 2 02:29:17.268110 kernel: Early memory node ranges
Jul 2 02:29:17.268116 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jul 2 02:29:17.268121 kernel: node 0: [mem 0x0000000000824000-0x000000003ec80fff]
Jul 2 02:29:17.268127 kernel: node 0: [mem 0x000000003ec81000-0x000000003eca9fff]
Jul 2 02:29:17.271210 kernel: node 0: [mem 0x000000003ecaa000-0x000000003fd29fff]
Jul 2 02:29:17.271223 kernel: node 0: [mem 0x000000003fd2a000-0x000000003fd7dfff]
Jul 2 02:29:17.271229 kernel: node 0: [mem 0x000000003fd7e000-0x000000003fd89fff]
Jul 2 02:29:17.271235 kernel: node 0: [mem 0x000000003fd8a000-0x000000003fd8dfff]
Jul 2 02:29:17.271241 kernel: node 0: [mem 0x000000003fd8e000-0x000000003fffffff]
Jul 2 02:29:17.271247 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jul 2 02:29:17.271257 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jul 2 02:29:17.271267 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jul 2 02:29:17.271273 kernel: psci: probing for conduit method from ACPI.
Jul 2 02:29:17.271279 kernel: psci: PSCIv1.1 detected in firmware.
Jul 2 02:29:17.271286 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 2 02:29:17.271293 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jul 2 02:29:17.271299 kernel: psci: SMC Calling Convention v1.4
Jul 2 02:29:17.271305 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1
Jul 2 02:29:17.271311 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1
Jul 2 02:29:17.271318 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Jul 2 02:29:17.271324 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Jul 2 02:29:17.271330 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 2 02:29:17.271337 kernel: Detected PIPT I-cache on CPU0
Jul 2 02:29:17.271343 kernel: CPU features: detected: GIC system register CPU interface
Jul 2 02:29:17.271349 kernel: CPU features: detected: Hardware dirty bit management
Jul 2 02:29:17.271355 kernel: CPU features: detected: Spectre-BHB
Jul 2 02:29:17.271362 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 2 02:29:17.271370 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 2 02:29:17.271376 kernel: CPU features: detected: ARM erratum 1418040
Jul 2 02:29:17.271382 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jul 2 02:29:17.271388 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jul 2 02:29:17.271395 kernel: Policy zone: Normal
Jul 2 02:29:17.271403 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=7b86ecfcd4701bdf4668db795601b20c118ac0b117c34a9b3836e0a5236b73b0
Jul 2 02:29:17.271410 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 02:29:17.271416 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 2 02:29:17.271423 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 02:29:17.271429 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 02:29:17.271436 kernel: software IO TLB: mapped [mem 0x000000003abd2000-0x000000003ebd2000] (64MB)
Jul 2 02:29:17.271443 kernel: Memory: 3990260K/4194160K available (9792K kernel code, 2092K rwdata, 7572K rodata, 36352K init, 777K bss, 203900K reserved, 0K cma-reserved)
Jul 2 02:29:17.271450 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 2 02:29:17.271456 kernel: trace event string verifier disabled
Jul 2 02:29:17.271462 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 2 02:29:17.271469 kernel: rcu: RCU event tracing is enabled.
Jul 2 02:29:17.271475 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 2 02:29:17.271482 kernel: Trampoline variant of Tasks RCU enabled.
Jul 2 02:29:17.271488 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 02:29:17.271494 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 02:29:17.271501 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 2 02:29:17.271509 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 2 02:29:17.271515 kernel: GICv3: 960 SPIs implemented
Jul 2 02:29:17.271521 kernel: GICv3: 0 Extended SPIs implemented
Jul 2 02:29:17.271527 kernel: GICv3: Distributor has no Range Selector support
Jul 2 02:29:17.271533 kernel: Root IRQ handler: gic_handle_irq
Jul 2 02:29:17.271539 kernel: GICv3: 16 PPIs implemented
Jul 2 02:29:17.271545 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jul 2 02:29:17.271552 kernel: ITS: No ITS available, not enabling LPIs
Jul 2 02:29:17.271558 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 02:29:17.271565 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 2 02:29:17.271571 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 2 02:29:17.271577 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 2 02:29:17.271585 kernel: Console: colour dummy device 80x25
Jul 2 02:29:17.271592 kernel: printk: console [tty1] enabled
Jul 2 02:29:17.271598 kernel: ACPI: Core revision 20210730
Jul 2 02:29:17.271605 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 2 02:29:17.271612 kernel: pid_max: default: 32768 minimum: 301
Jul 2 02:29:17.271618 kernel: LSM: Security Framework initializing
Jul 2 02:29:17.271624 kernel: SELinux: Initializing.
Jul 2 02:29:17.271631 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 02:29:17.271637 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 02:29:17.271645 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Jul 2 02:29:17.271652 kernel: Hyper-V: Host Build 10.0.22477.1369-1-0
Jul 2 02:29:17.271658 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 02:29:17.271665 kernel: Remapping and enabling EFI services.
Jul 2 02:29:17.271671 kernel: smp: Bringing up secondary CPUs ...
Jul 2 02:29:17.271677 kernel: Detected PIPT I-cache on CPU1
Jul 2 02:29:17.271684 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jul 2 02:29:17.271690 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 02:29:17.271696 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 2 02:29:17.271704 kernel: smp: Brought up 1 node, 2 CPUs
Jul 2 02:29:17.271710 kernel: SMP: Total of 2 processors activated.
Jul 2 02:29:17.271717 kernel: CPU features: detected: 32-bit EL0 Support
Jul 2 02:29:17.271723 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jul 2 02:29:17.271730 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 2 02:29:17.271737 kernel: CPU features: detected: CRC32 instructions
Jul 2 02:29:17.271743 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 2 02:29:17.271750 kernel: CPU features: detected: LSE atomic instructions
Jul 2 02:29:17.271757 kernel: CPU features: detected: Privileged Access Never
Jul 2 02:29:17.271764 kernel: CPU: All CPU(s) started at EL1
Jul 2 02:29:17.271771 kernel: alternatives: patching kernel code
Jul 2 02:29:17.271782 kernel: devtmpfs: initialized
Jul 2 02:29:17.271790 kernel: KASLR enabled
Jul 2 02:29:17.271796 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 02:29:17.271803 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 2 02:29:17.271810 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 02:29:17.271817 kernel: SMBIOS 3.1.0 present.
Jul 2 02:29:17.271823 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 11/28/2023
Jul 2 02:29:17.271830 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 02:29:17.271838 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 2 02:29:17.271846 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 2 02:29:17.271852 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 2 02:29:17.271859 kernel: audit: initializing netlink subsys (disabled)
Jul 2 02:29:17.271866 kernel: audit: type=2000 audit(0.089:1): state=initialized audit_enabled=0 res=1
Jul 2 02:29:17.271873 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 02:29:17.271880 kernel: cpuidle: using governor menu
Jul 2 02:29:17.271888 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 2 02:29:17.271895 kernel: ASID allocator initialised with 32768 entries
Jul 2 02:29:17.271901 kernel: ACPI: bus type PCI registered
Jul 2 02:29:17.271908 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 02:29:17.271915 kernel: Serial: AMBA PL011 UART driver
Jul 2 02:29:17.271922 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 02:29:17.271929 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Jul 2 02:29:17.271936 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 02:29:17.271942 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Jul 2 02:29:17.271950 kernel: cryptd: max_cpu_qlen set to 1000
Jul 2 02:29:17.271957 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 2 02:29:17.271964 kernel: ACPI: Added _OSI(Module Device)
Jul 2 02:29:17.271971 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 02:29:17.271978 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 02:29:17.271985 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 02:29:17.271991 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 2 02:29:17.271998 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 2 02:29:17.272005 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 2 02:29:17.272013 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 02:29:17.272020 kernel: ACPI: Interpreter enabled
Jul 2 02:29:17.272026 kernel: ACPI: Using GIC for interrupt routing
Jul 2 02:29:17.272033 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jul 2 02:29:17.272040 kernel: printk: console [ttyAMA0] enabled
Jul 2 02:29:17.272047 kernel: printk: bootconsole [pl11] disabled
Jul 2 02:29:17.272054 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jul 2 02:29:17.272061 kernel: iommu: Default domain type: Translated
Jul 2 02:29:17.272068 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 2 02:29:17.272076 kernel: vgaarb: loaded
Jul 2 02:29:17.272083 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 2 02:29:17.272090 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 2 02:29:17.272096 kernel: PTP clock support registered
Jul 2 02:29:17.272103 kernel: Registered efivars operations
Jul 2 02:29:17.272110 kernel: No ACPI PMU IRQ for CPU0
Jul 2 02:29:17.272117 kernel: No ACPI PMU IRQ for CPU1
Jul 2 02:29:17.272123 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 2 02:29:17.272130 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 02:29:17.272171 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 02:29:17.272178 kernel: pnp: PnP ACPI init
Jul 2 02:29:17.272185 kernel: pnp: PnP ACPI: found 0 devices
Jul 2 02:29:17.272192 kernel: NET: Registered PF_INET protocol family
Jul 2 02:29:17.272199 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 02:29:17.272206 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 2 02:29:17.272213 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 02:29:17.272220 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 02:29:17.272227 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Jul 2 02:29:17.272235 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 2 02:29:17.272242 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 02:29:17.272249 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 02:29:17.272256 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 02:29:17.272262 kernel: PCI: CLS 0 bytes, default 64
Jul 2 02:29:17.272269 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Jul 2 02:29:17.272276 kernel: kvm [1]: HYP mode not available
Jul 2 02:29:17.272283 kernel: Initialise system trusted keyrings
Jul 2 02:29:17.272289 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 2 02:29:17.272297 kernel: Key type asymmetric registered
Jul 2 02:29:17.272304 kernel: Asymmetric key parser 'x509' registered
Jul 2 02:29:17.272311 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 2 02:29:17.272317 kernel: io scheduler mq-deadline registered
Jul 2 02:29:17.272324 kernel: io scheduler kyber registered
Jul 2 02:29:17.272331 kernel: io scheduler bfq registered
Jul 2 02:29:17.272338 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 02:29:17.272345 kernel: thunder_xcv, ver 1.0
Jul 2 02:29:17.272352 kernel: thunder_bgx, ver 1.0
Jul 2 02:29:17.272360 kernel: nicpf, ver 1.0
Jul 2 02:29:17.272367 kernel: nicvf, ver 1.0
Jul 2 02:29:17.272505 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 2 02:29:17.272571 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-07-02T02:29:16 UTC (1719887356)
Jul 2 02:29:17.272581 kernel: efifb: probing for efifb
Jul 2 02:29:17.272588 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jul 2 02:29:17.272595 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jul 2 02:29:17.272601 kernel: efifb: scrolling: redraw
Jul 2 02:29:17.272610 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 2 02:29:17.272618 kernel: Console: switching to colour frame buffer device 128x48
Jul 2 02:29:17.272624 kernel: fb0: EFI VGA frame buffer device
Jul 2 02:29:17.272631 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jul 2 02:29:17.272638 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 2 02:29:17.272645 kernel: NET: Registered PF_INET6 protocol family
Jul 2 02:29:17.272652 kernel: Segment Routing with IPv6
Jul 2 02:29:17.272658 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 02:29:17.272665 kernel: NET: Registered PF_PACKET protocol family
Jul 2 02:29:17.272673 kernel: Key type dns_resolver registered
Jul 2 02:29:17.272680 kernel: registered taskstats version 1
Jul 2 02:29:17.272686 kernel: Loading compiled-in X.509 certificates
Jul 2 02:29:17.272693 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.161-flatcar: c418313b450e4055b23e41c11cb6dc415de0265d'
Jul 2 02:29:17.272700 kernel: Key type .fscrypt registered
Jul 2 02:29:17.272706 kernel: Key type fscrypt-provisioning registered
Jul 2 02:29:17.272714 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 02:29:17.272720 kernel: ima: Allocated hash algorithm: sha1
Jul 2 02:29:17.272727 kernel: ima: No architecture policies found
Jul 2 02:29:17.272735 kernel: clk: Disabling unused clocks
Jul 2 02:29:17.272742 kernel: Freeing unused kernel memory: 36352K
Jul 2 02:29:17.272748 kernel: Run /init as init process
Jul 2 02:29:17.272755 kernel: with arguments:
Jul 2 02:29:17.272762 kernel: /init
Jul 2 02:29:17.272768 kernel: with environment:
Jul 2 02:29:17.272774 kernel: HOME=/
Jul 2 02:29:17.272781 kernel: TERM=linux
Jul 2 02:29:17.272787 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 02:29:17.272798 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 2 02:29:17.272807 systemd[1]: Detected virtualization microsoft.
Jul 2 02:29:17.272815 systemd[1]: Detected architecture arm64.
Jul 2 02:29:17.272822 systemd[1]: Running in initrd.
Jul 2 02:29:17.272829 systemd[1]: No hostname configured, using default hostname.
Jul 2 02:29:17.272836 systemd[1]: Hostname set to .
Jul 2 02:29:17.272844 systemd[1]: Initializing machine ID from random generator.
Jul 2 02:29:17.272853 systemd[1]: Queued start job for default target initrd.target.
Jul 2 02:29:17.272860 systemd[1]: Started systemd-ask-password-console.path.
Jul 2 02:29:17.272867 systemd[1]: Reached target cryptsetup.target.
Jul 2 02:29:17.272875 systemd[1]: Reached target paths.target.
Jul 2 02:29:17.272882 systemd[1]: Reached target slices.target.
Jul 2 02:29:17.272889 systemd[1]: Reached target swap.target.
Jul 2 02:29:17.272896 systemd[1]: Reached target timers.target.
Jul 2 02:29:17.272904 systemd[1]: Listening on iscsid.socket.
Jul 2 02:29:17.272912 systemd[1]: Listening on iscsiuio.socket.
Jul 2 02:29:17.272920 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 2 02:29:17.272927 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 2 02:29:17.272934 systemd[1]: Listening on systemd-journald.socket.
Jul 2 02:29:17.272942 systemd[1]: Listening on systemd-networkd.socket.
Jul 2 02:29:17.272949 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 2 02:29:17.272956 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 2 02:29:17.272963 systemd[1]: Reached target sockets.target.
Jul 2 02:29:17.272971 systemd[1]: Starting kmod-static-nodes.service...
Jul 2 02:29:17.272979 systemd[1]: Finished network-cleanup.service.
Jul 2 02:29:17.272987 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 02:29:17.272994 systemd[1]: Starting systemd-journald.service...
Jul 2 02:29:17.273001 systemd[1]: Starting systemd-modules-load.service...
Jul 2 02:29:17.273009 systemd[1]: Starting systemd-resolved.service...
Jul 2 02:29:17.273020 systemd-journald[275]: Journal started
Jul 2 02:29:17.273059 systemd-journald[275]: Runtime Journal (/run/log/journal/deee3b5c75674297a6b91ae9ddaeeca4) is 8.0M, max 78.6M, 70.6M free.
Jul 2 02:29:17.260901 systemd-modules-load[276]: Inserted module 'overlay'
Jul 2 02:29:17.300868 systemd[1]: Starting systemd-vconsole-setup.service...
Jul 2 02:29:17.300921 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 02:29:17.310750 systemd-modules-load[276]: Inserted module 'br_netfilter'
Jul 2 02:29:17.320056 kernel: Bridge firewalling registered
Jul 2 02:29:17.320079 systemd[1]: Started systemd-journald.service.
Jul 2 02:29:17.332946 systemd-resolved[277]: Positive Trust Anchors:
Jul 2 02:29:17.354841 kernel: SCSI subsystem initialized
Jul 2 02:29:17.354862 kernel: audit: type=1130 audit(1719887357.335:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 02:29:17.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 02:29:17.332963 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 02:29:17.397561 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 02:29:17.397585 kernel: audit: type=1130 audit(1719887357.367:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 02:29:17.397596 kernel: device-mapper: uevent: version 1.0.3
Jul 2 02:29:17.397605 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Jul 2 02:29:17.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 02:29:17.332992 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 2 02:29:17.335042 systemd-resolved[277]: Defaulting to hostname 'linux'.
Jul 2 02:29:17.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 02:29:17.336283 systemd[1]: Started systemd-resolved.service.
Jul 2 02:29:17.367555 systemd[1]: Finished kmod-static-nodes.service.
Jul 2 02:29:17.437010 systemd-modules-load[276]: Inserted module 'dm_multipath'
Jul 2 02:29:17.503373 kernel: audit: type=1130 audit(1719887357.441:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 02:29:17.503395 kernel: audit: type=1130 audit(1719887357.472:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 02:29:17.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 02:29:17.464018 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 02:29:17.472683 systemd[1]: Finished systemd-modules-load.service.
Jul 2 02:29:17.555975 kernel: audit: type=1130 audit(1719887357.498:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 02:29:17.556005 kernel: audit: type=1130 audit(1719887357.508:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 02:29:17.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 02:29:17.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 02:29:17.498929 systemd[1]: Finished systemd-vconsole-setup.service.
Jul 2 02:29:17.508322 systemd[1]: Reached target nss-lookup.target.
Jul 2 02:29:17.561581 systemd[1]: Starting dracut-cmdline-ask.service...
Jul 2 02:29:17.566907 systemd[1]: Starting systemd-sysctl.service...
Jul 2 02:29:17.584832 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 2 02:29:17.594673 systemd[1]: Finished dracut-cmdline-ask.service.
Jul 2 02:29:17.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 02:29:17.603449 systemd[1]: Finished systemd-sysctl.service.
Jul 2 02:29:17.628275 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 2 02:29:17.660299 kernel: audit: type=1130 audit(1719887357.602:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 02:29:17.660321 kernel: audit: type=1130 audit(1719887357.627:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 02:29:17.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 02:29:17.656179 systemd[1]: Starting dracut-cmdline.service...
Jul 2 02:29:17.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 02:29:17.682104 kernel: audit: type=1130 audit(1719887357.655:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 02:29:17.688153 dracut-cmdline[298]: dracut-dracut-053
Jul 2 02:29:17.692698 dracut-cmdline[298]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=7b86ecfcd4701bdf4668db795601b20c118ac0b117c34a9b3836e0a5236b73b0
Jul 2 02:29:17.789170 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 02:29:17.805160 kernel: iscsi: registered transport (tcp)
Jul 2 02:29:17.825278 kernel: iscsi: registered transport (qla4xxx)
Jul 2 02:29:17.825313 kernel: QLogic iSCSI HBA Driver
Jul 2 02:29:17.854993 systemd[1]: Finished dracut-cmdline.service.
Jul 2 02:29:17.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 02:29:17.860758 systemd[1]: Starting dracut-pre-udev.service...
Jul 2 02:29:17.912160 kernel: raid6: neonx8 gen() 13825 MB/s
Jul 2 02:29:17.932154 kernel: raid6: neonx8 xor() 10842 MB/s
Jul 2 02:29:17.952145 kernel: raid6: neonx4 gen() 13542 MB/s
Jul 2 02:29:17.973145 kernel: raid6: neonx4 xor() 11318 MB/s
Jul 2 02:29:17.994144 kernel: raid6: neonx2 gen() 12969 MB/s
Jul 2 02:29:18.014144 kernel: raid6: neonx2 xor() 10375 MB/s
Jul 2 02:29:18.036146 kernel: raid6: neonx1 gen() 10516 MB/s
Jul 2 02:29:18.056149 kernel: raid6: neonx1 xor() 8790 MB/s
Jul 2 02:29:18.076143 kernel: raid6: int64x8 gen() 6270 MB/s
Jul 2 02:29:18.097148 kernel: raid6: int64x8 xor() 3543 MB/s
Jul 2 02:29:18.118148 kernel: raid6: int64x4 gen() 7250 MB/s
Jul 2 02:29:18.138145 kernel: raid6: int64x4 xor() 3855 MB/s
Jul 2 02:29:18.159146 kernel: raid6: int64x2 gen() 6152 MB/s
Jul 2 02:29:18.179148 kernel: raid6: int64x2 xor() 3320 MB/s
Jul 2 02:29:18.200144 kernel: raid6: int64x1 gen() 5046 MB/s
Jul 2 02:29:18.225858 kernel: raid6: int64x1 xor() 2646 MB/s
Jul 2 02:29:18.225870 kernel: raid6: using algorithm neonx8 gen() 13825 MB/s
Jul 2 02:29:18.225878 kernel: raid6: .... xor() 10842 MB/s, rmw enabled
Jul 2 02:29:18.230252 kernel: raid6: using neon recovery algorithm
Jul 2 02:29:18.251489 kernel: xor: measuring software checksum speed
Jul 2 02:29:18.251501 kernel: 8regs : 17304 MB/sec
Jul 2 02:29:18.255730 kernel: 32regs : 20749 MB/sec
Jul 2 02:29:18.259740 kernel: arm64_neon : 28102 MB/sec
Jul 2 02:29:18.259753 kernel: xor: using function: arm64_neon (28102 MB/sec)
Jul 2 02:29:18.320153 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Jul 2 02:29:18.329883 systemd[1]: Finished dracut-pre-udev.service.
Jul 2 02:29:18.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 02:29:18.338000 audit: BPF prog-id=7 op=LOAD
Jul 2 02:29:18.338000 audit: BPF prog-id=8 op=LOAD
Jul 2 02:29:18.339165 systemd[1]: Starting systemd-udevd.service...
Jul 2 02:29:18.355684 systemd-udevd[474]: Using default interface naming scheme 'v252'.
Jul 2 02:29:18.362358 systemd[1]: Started systemd-udevd.service.
Jul 2 02:29:18.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 02:29:18.372409 systemd[1]: Starting dracut-pre-trigger.service...
Jul 2 02:29:18.387343 dracut-pre-trigger[495]: rd.md=0: removing MD RAID activation
Jul 2 02:29:18.423010 systemd[1]: Finished dracut-pre-trigger.service.
Jul 2 02:29:18.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 02:29:18.428615 systemd[1]: Starting systemd-udev-trigger.service...
Jul 2 02:29:18.463287 systemd[1]: Finished systemd-udev-trigger.service.
Jul 2 02:29:18.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 02:29:18.523155 kernel: hv_vmbus: Vmbus version:5.3
Jul 2 02:29:18.532156 kernel: hv_vmbus: registering driver hyperv_keyboard
Jul 2 02:29:18.563677 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jul 2 02:29:18.563740 kernel: hv_vmbus: registering driver hv_netvsc
Jul 2 02:29:18.563750 kernel: hv_vmbus: registering driver hid_hyperv
Jul 2 02:29:18.581888 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jul 2 02:29:18.581941 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jul 2 02:29:18.589157 kernel: hv_vmbus: registering driver hv_storvsc
Jul 2 02:29:18.596562 kernel: scsi host0: storvsc_host_t
Jul 2 02:29:18.596761 kernel: scsi host1: storvsc_host_t
Jul 2 02:29:18.596787 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jul 2 02:29:18.614161 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jul 2 02:29:18.632451 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jul 2 02:29:18.632671 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 2 02:29:18.643810 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jul 2 02:29:18.644013 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jul 2 02:29:18.648823 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jul 2 02:29:18.649002 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jul 2 02:29:18.656170 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jul 2 02:29:18.656344 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jul 2 02:29:18.667720 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 2 02:29:18.667774 kernel: hv_netvsc 000d3a6e-d806-000d-3a6e-d806000d3a6e eth0: VF slot 1 added
Jul 2 02:29:18.677161 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jul 2 02:29:18.691088 kernel: hv_vmbus: registering driver hv_pci
Jul 2 02:29:18.691157 kernel: hv_pci 9738b32c-22b9-4a0f-b2fc-4d829b4bae99: PCI VMBus probing: Using version 0x10004
Jul 2 02:29:18.710846 kernel: hv_pci 9738b32c-22b9-4a0f-b2fc-4d829b4bae99: PCI host bridge to bus 22b9:00
Jul 2 02:29:18.711034 kernel: pci_bus 22b9:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jul 2 02:29:18.711153 kernel: pci_bus 22b9:00: No busn resource found for root bus, will use [bus 00-ff]
Jul 2 02:29:18.726270 kernel: pci 22b9:00:02.0: [15b3:1018] type 00 class 0x020000
Jul 2 02:29:18.739271 kernel: pci 22b9:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jul 2 02:29:18.760188 kernel: pci 22b9:00:02.0: enabling Extended Tags
Jul 2 02:29:18.778255 kernel: pci 22b9:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 22b9:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Jul 2 02:29:18.791147 kernel: pci_bus 22b9:00: busn_res: [bus 00-ff] end is updated to 00
Jul 2 02:29:18.791312 kernel: pci 22b9:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jul 2 02:29:18.831167 kernel: mlx5_core 22b9:00:02.0: firmware version: 16.30.1284
Jul 2 02:29:18.985152 kernel: mlx5_core 22b9:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0)
Jul 2 02:29:19.041616 kernel: hv_netvsc 000d3a6e-d806-000d-3a6e-d806000d3a6e eth0: VF registering: eth1
Jul 2 02:29:19.041806 kernel: mlx5_core 22b9:00:02.0 eth1: joined to eth0
Jul 2 02:29:19.054162 kernel: mlx5_core 22b9:00:02.0 enP8889s1: renamed from eth1
Jul 2 02:29:19.078127 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Jul 2 02:29:19.171222 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (528)
Jul 2 02:29:19.182336 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Jul 2 02:29:19.260546 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Jul 2 02:29:19.315765 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Jul 2 02:29:19.321832 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 2 02:29:19.334159 systemd[1]: Starting disk-uuid.service... Jul 2 02:29:19.360157 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 02:29:19.368150 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 02:29:20.381154 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 02:29:20.381551 disk-uuid[597]: The operation has completed successfully. Jul 2 02:29:20.434772 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 02:29:20.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:20.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:20.434858 systemd[1]: Finished disk-uuid.service. Jul 2 02:29:20.448755 systemd[1]: Starting verity-setup.service... Jul 2 02:29:20.487161 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 2 02:29:20.744889 systemd[1]: Found device dev-mapper-usr.device. Jul 2 02:29:20.750448 systemd[1]: Finished verity-setup.service. Jul 2 02:29:20.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:20.760355 systemd[1]: Mounting sysusr-usr.mount... Jul 2 02:29:20.822165 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 2 02:29:20.822409 systemd[1]: Mounted sysusr-usr.mount. Jul 2 02:29:20.826755 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 2 02:29:20.827576 systemd[1]: Starting ignition-setup.service... 
Jul 2 02:29:20.835363 systemd[1]: Starting parse-ip-for-networkd.service... Jul 2 02:29:20.877060 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 2 02:29:20.877114 kernel: BTRFS info (device sda6): using free space tree Jul 2 02:29:20.881800 kernel: BTRFS info (device sda6): has skinny extents Jul 2 02:29:20.917053 systemd[1]: Finished parse-ip-for-networkd.service. Jul 2 02:29:20.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:20.927000 audit: BPF prog-id=9 op=LOAD Jul 2 02:29:20.927850 systemd[1]: Starting systemd-networkd.service... Jul 2 02:29:20.957459 systemd-networkd[835]: lo: Link UP Jul 2 02:29:20.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:20.957470 systemd-networkd[835]: lo: Gained carrier Jul 2 02:29:20.958201 systemd-networkd[835]: Enumeration completed Jul 2 02:29:20.961171 systemd[1]: Started systemd-networkd.service. Jul 2 02:29:20.961639 systemd-networkd[835]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 02:29:21.026629 kernel: kauditd_printk_skb: 13 callbacks suppressed Jul 2 02:29:21.026657 kernel: audit: type=1130 audit(1719887360.993:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:20.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:20.966204 systemd[1]: Reached target network.target. 
Jul 2 02:29:21.054704 kernel: audit: type=1130 audit(1719887361.026:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:21.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:21.055241 iscsid[847]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 2 02:29:21.055241 iscsid[847]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Jul 2 02:29:21.055241 iscsid[847]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jul 2 02:29:21.055241 iscsid[847]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 2 02:29:21.055241 iscsid[847]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 2 02:29:21.055241 iscsid[847]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 2 02:29:21.055241 iscsid[847]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 2 02:29:21.187616 kernel: mlx5_core 22b9:00:02.0 enP8889s1: Link up Jul 2 02:29:21.188409 kernel: audit: type=1130 audit(1719887361.095:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 02:29:21.188431 kernel: hv_netvsc 000d3a6e-d806-000d-3a6e-d806000d3a6e eth0: Data path switched to VF: enP8889s1 Jul 2 02:29:21.188522 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 02:29:21.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:20.971522 systemd[1]: Starting iscsiuio.service... Jul 2 02:29:20.984252 systemd[1]: Started iscsiuio.service. Jul 2 02:29:20.993868 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 2 02:29:21.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:20.994981 systemd[1]: Starting iscsid.service... Jul 2 02:29:21.233349 kernel: audit: type=1130 audit(1719887361.206:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:21.023063 systemd[1]: Started iscsid.service. Jul 2 02:29:21.041991 systemd[1]: Starting dracut-initqueue.service... Jul 2 02:29:21.084188 systemd[1]: Finished dracut-initqueue.service. Jul 2 02:29:21.097110 systemd[1]: Reached target remote-fs-pre.target. Jul 2 02:29:21.144650 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 02:29:21.150014 systemd-networkd[835]: enP8889s1: Link UP Jul 2 02:29:21.150095 systemd-networkd[835]: eth0: Link UP Jul 2 02:29:21.150243 systemd-networkd[835]: eth0: Gained carrier Jul 2 02:29:21.155409 systemd[1]: Reached target remote-fs.target. Jul 2 02:29:21.178482 systemd[1]: Starting dracut-pre-mount.service... 
Jul 2 02:29:21.186880 systemd-networkd[835]: enP8889s1: Gained carrier Jul 2 02:29:21.197270 systemd-networkd[835]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 2 02:29:21.198789 systemd[1]: Finished dracut-pre-mount.service. Jul 2 02:29:21.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:21.297992 systemd[1]: Finished ignition-setup.service. Jul 2 02:29:21.324945 kernel: audit: type=1130 audit(1719887361.302:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:21.325297 systemd[1]: Starting ignition-fetch-offline.service... Jul 2 02:29:22.330234 systemd-networkd[835]: eth0: Gained IPv6LL Jul 2 02:29:25.829261 ignition[862]: Ignition 2.14.0 Jul 2 02:29:25.832575 ignition[862]: Stage: fetch-offline Jul 2 02:29:25.832654 ignition[862]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 02:29:25.832675 ignition[862]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Jul 2 02:29:25.875350 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 02:29:25.908867 ignition[862]: parsed url from cmdline: "" Jul 2 02:29:25.908879 ignition[862]: no config URL provided Jul 2 02:29:25.908893 ignition[862]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 02:29:25.908905 ignition[862]: no config at "/usr/lib/ignition/user.ign" Jul 2 02:29:25.926001 systemd[1]: Finished ignition-fetch-offline.service. Jul 2 02:29:25.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jul 2 02:29:25.908911 ignition[862]: failed to fetch config: resource requires networking Jul 2 02:29:25.966124 kernel: audit: type=1130 audit(1719887365.935:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:25.936349 systemd[1]: Starting ignition-fetch.service... Jul 2 02:29:25.912785 ignition[862]: Ignition finished successfully Jul 2 02:29:25.949158 ignition[868]: Ignition 2.14.0 Jul 2 02:29:25.949164 ignition[868]: Stage: fetch Jul 2 02:29:25.949282 ignition[868]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 02:29:25.949310 ignition[868]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Jul 2 02:29:25.961115 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 02:29:25.961262 ignition[868]: parsed url from cmdline: "" Jul 2 02:29:25.961266 ignition[868]: no config URL provided Jul 2 02:29:25.961271 ignition[868]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 02:29:25.961280 ignition[868]: no config at "/usr/lib/ignition/user.ign" Jul 2 02:29:25.961317 ignition[868]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jul 2 02:29:26.054210 ignition[868]: GET result: OK Jul 2 02:29:26.054249 ignition[868]: failed to retrieve userdata from IMDS, falling back to custom data: not a config (empty) Jul 2 02:29:26.174383 ignition[868]: opening config device: "/dev/sr0" Jul 2 02:29:26.174833 ignition[868]: getting drive status for "/dev/sr0" Jul 2 02:29:26.174873 ignition[868]: drive status: OK Jul 2 02:29:26.174915 ignition[868]: mounting config device Jul 2 02:29:26.174933 ignition[868]: op(1): [started] mounting "/dev/sr0" at "/tmp/ignition-azure2113723851" Jul 2 
02:29:26.203158 kernel: UDF-fs: INFO Mounting volume 'UDF Volume', timestamp 2024/07/03 00:00 (1000) Jul 2 02:29:26.203920 ignition[868]: op(1): [finished] mounting "/dev/sr0" at "/tmp/ignition-azure2113723851" Jul 2 02:29:26.203929 ignition[868]: checking for config drive Jul 2 02:29:26.205037 systemd[1]: tmp-ignition\x2dazure2113723851.mount: Deactivated successfully. Jul 2 02:29:26.204354 ignition[868]: reading config Jul 2 02:29:26.215050 unknown[868]: fetched base config from "system" Jul 2 02:29:26.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:26.204766 ignition[868]: op(2): [started] unmounting "/dev/sr0" at "/tmp/ignition-azure2113723851" Jul 2 02:29:26.255700 kernel: audit: type=1130 audit(1719887366.223:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:26.215058 unknown[868]: fetched base config from "system" Jul 2 02:29:26.211571 ignition[868]: op(2): [finished] unmounting "/dev/sr0" at "/tmp/ignition-azure2113723851" Jul 2 02:29:26.215063 unknown[868]: fetched user config from "azure" Jul 2 02:29:26.292290 kernel: audit: type=1130 audit(1719887366.270:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:26.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:26.211589 ignition[868]: config has been read from custom data Jul 2 02:29:26.219118 systemd[1]: Finished ignition-fetch.service. 
Jul 2 02:29:26.211651 ignition[868]: parsing config with SHA512: 19a801a2735dac33f354868dc141bc45d2d8c362771b690e9471e6615fe44191aa1aa136b04ecec699702d340a1c1921984d345601bb3fca2aa79ad17b82fc73 Jul 2 02:29:26.244901 systemd[1]: Starting ignition-kargs.service... Jul 2 02:29:26.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:26.215616 ignition[868]: fetch: fetch complete Jul 2 02:29:26.336921 kernel: audit: type=1130 audit(1719887366.307:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:26.266840 systemd[1]: Finished ignition-kargs.service. Jul 2 02:29:26.215621 ignition[868]: fetch: fetch passed Jul 2 02:29:26.272005 systemd[1]: Starting ignition-disks.service... Jul 2 02:29:26.215664 ignition[868]: Ignition finished successfully Jul 2 02:29:26.303845 systemd[1]: Finished ignition-disks.service. Jul 2 02:29:26.255827 ignition[877]: Ignition 2.14.0 Jul 2 02:29:26.308304 systemd[1]: Reached target initrd-root-device.target. Jul 2 02:29:26.255834 ignition[877]: Stage: kargs Jul 2 02:29:26.333507 systemd[1]: Reached target local-fs-pre.target. Jul 2 02:29:26.255934 ignition[877]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 02:29:26.341187 systemd[1]: Reached target local-fs.target. Jul 2 02:29:26.255952 ignition[877]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Jul 2 02:29:26.348938 systemd[1]: Reached target sysinit.target. Jul 2 02:29:26.258397 ignition[877]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 02:29:26.356526 systemd[1]: Reached target basic.target. 
Jul 2 02:29:26.260755 ignition[877]: kargs: kargs passed Jul 2 02:29:26.365305 systemd[1]: Starting systemd-fsck-root.service... Jul 2 02:29:26.260802 ignition[877]: Ignition finished successfully Jul 2 02:29:26.281063 ignition[883]: Ignition 2.14.0 Jul 2 02:29:26.281069 ignition[883]: Stage: disks Jul 2 02:29:26.281189 ignition[883]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 02:29:26.281206 ignition[883]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Jul 2 02:29:26.283736 ignition[883]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 02:29:26.302925 ignition[883]: disks: disks passed Jul 2 02:29:26.302983 ignition[883]: Ignition finished successfully Jul 2 02:29:26.460948 systemd-fsck[891]: ROOT: clean, 614/7326000 files, 481075/7359488 blocks Jul 2 02:29:26.469932 systemd[1]: Finished systemd-fsck-root.service. Jul 2 02:29:26.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:26.478615 systemd[1]: Mounting sysroot.mount... Jul 2 02:29:26.502643 kernel: audit: type=1130 audit(1719887366.474:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:26.518190 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 2 02:29:26.518244 systemd[1]: Mounted sysroot.mount. Jul 2 02:29:26.522044 systemd[1]: Reached target initrd-root-fs.target. Jul 2 02:29:26.553125 systemd[1]: Mounting sysroot-usr.mount... Jul 2 02:29:26.557533 systemd[1]: Starting flatcar-metadata-hostname.service... 
Jul 2 02:29:26.564508 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 02:29:26.564540 systemd[1]: Reached target ignition-diskful.target. Jul 2 02:29:26.574744 systemd[1]: Mounted sysroot-usr.mount. Jul 2 02:29:26.645708 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 2 02:29:26.650842 systemd[1]: Starting initrd-setup-root.service... Jul 2 02:29:26.673174 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (901) Jul 2 02:29:26.685161 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 2 02:29:26.685217 kernel: BTRFS info (device sda6): using free space tree Jul 2 02:29:26.685237 initrd-setup-root[906]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 02:29:26.696999 kernel: BTRFS info (device sda6): has skinny extents Jul 2 02:29:26.700830 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 2 02:29:26.796893 initrd-setup-root[932]: cut: /sysroot/etc/group: No such file or directory Jul 2 02:29:26.846599 initrd-setup-root[940]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 02:29:26.855609 initrd-setup-root[948]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 02:29:27.460514 systemd[1]: Finished initrd-setup-root.service. Jul 2 02:29:27.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:27.484912 systemd[1]: Starting ignition-mount.service... Jul 2 02:29:27.496944 kernel: audit: type=1130 audit(1719887367.464:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:27.491407 systemd[1]: Starting sysroot-boot.service... 
Jul 2 02:29:27.505117 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Jul 2 02:29:27.505233 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Jul 2 02:29:27.524164 systemd[1]: Finished sysroot-boot.service. Jul 2 02:29:27.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:27.548671 ignition[969]: INFO : Ignition 2.14.0 Jul 2 02:29:27.548671 ignition[969]: INFO : Stage: mount Jul 2 02:29:27.548671 ignition[969]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 02:29:27.548671 ignition[969]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Jul 2 02:29:27.548671 ignition[969]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 02:29:27.603011 kernel: audit: type=1130 audit(1719887367.528:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:27.603037 kernel: audit: type=1130 audit(1719887367.562:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:27.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:27.558515 systemd[1]: Finished ignition-mount.service. 
Jul 2 02:29:27.606930 ignition[969]: INFO : mount: mount passed Jul 2 02:29:27.606930 ignition[969]: INFO : Ignition finished successfully Jul 2 02:29:28.750004 coreos-metadata[900]: Jul 02 02:29:28.749 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 2 02:29:28.788331 coreos-metadata[900]: Jul 02 02:29:28.788 INFO Fetch successful Jul 2 02:29:28.826766 coreos-metadata[900]: Jul 02 02:29:28.826 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jul 2 02:29:28.838746 coreos-metadata[900]: Jul 02 02:29:28.838 INFO Fetch successful Jul 2 02:29:28.852026 coreos-metadata[900]: Jul 02 02:29:28.851 INFO wrote hostname ci-3510.3.5-a-c92d6bc2c6 to /sysroot/etc/hostname Jul 2 02:29:28.854122 systemd[1]: Finished flatcar-metadata-hostname.service. Jul 2 02:29:28.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:28.887316 systemd[1]: Starting ignition-files.service... Jul 2 02:29:28.897577 kernel: audit: type=1130 audit(1719887368.865:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:28.898260 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 2 02:29:28.916148 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (979) Jul 2 02:29:28.928271 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 2 02:29:28.928287 kernel: BTRFS info (device sda6): using free space tree Jul 2 02:29:28.928297 kernel: BTRFS info (device sda6): has skinny extents Jul 2 02:29:28.938306 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Jul 2 02:29:28.951461 ignition[998]: INFO : Ignition 2.14.0 Jul 2 02:29:28.951461 ignition[998]: INFO : Stage: files Jul 2 02:29:28.961186 ignition[998]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 02:29:28.961186 ignition[998]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Jul 2 02:29:28.961186 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 02:29:28.961186 ignition[998]: DEBUG : files: compiled without relabeling support, skipping Jul 2 02:29:28.961186 ignition[998]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 02:29:28.961186 ignition[998]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 02:29:29.114732 ignition[998]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 02:29:29.122799 ignition[998]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 02:29:29.135129 unknown[998]: wrote ssh authorized keys file for user: core Jul 2 02:29:29.141077 ignition[998]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 02:29:29.141077 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 2 02:29:29.141077 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 2 02:29:29.380672 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 2 02:29:29.622357 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 2 02:29:29.633682 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] 
writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 2 02:29:29.633682 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jul 2 02:29:30.132576 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 2 02:29:30.198352 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 2 02:29:30.208846 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 2 02:29:30.208846 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 02:29:30.208846 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 2 02:29:30.208846 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 2 02:29:30.208846 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 02:29:30.208846 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 02:29:30.208846 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 02:29:30.208846 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 02:29:30.208846 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 02:29:30.208846 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file 
"/sysroot/etc/flatcar/update.conf" Jul 2 02:29:30.208846 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jul 2 02:29:30.208846 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jul 2 02:29:30.352835 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1001) Jul 2 02:29:30.255867 systemd[1]: mnt-oem1334633857.mount: Deactivated successfully. Jul 2 02:29:30.358377 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Jul 2 02:29:30.358377 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition Jul 2 02:29:30.358377 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1334633857" Jul 2 02:29:30.358377 ignition[998]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1334633857": device or resource busy Jul 2 02:29:30.358377 ignition[998]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1334633857", trying btrfs: device or resource busy Jul 2 02:29:30.358377 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1334633857" Jul 2 02:29:30.358377 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1334633857" Jul 2 02:29:30.358377 ignition[998]: INFO : files: 
createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem1334633857" Jul 2 02:29:30.358377 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem1334633857" Jul 2 02:29:30.358377 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Jul 2 02:29:30.358377 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Jul 2 02:29:30.358377 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition Jul 2 02:29:30.358377 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem466182807" Jul 2 02:29:30.358377 ignition[998]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem466182807": device or resource busy Jul 2 02:29:30.508548 ignition[998]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem466182807", trying btrfs: device or resource busy Jul 2 02:29:30.508548 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem466182807" Jul 2 02:29:30.508548 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem466182807" Jul 2 02:29:30.508548 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem466182807" Jul 2 02:29:30.508548 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem466182807" Jul 2 02:29:30.508548 ignition[998]: INFO : files: 
createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Jul 2 02:29:30.508548 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jul 2 02:29:30.508548 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Jul 2 02:29:30.679828 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET result: OK Jul 2 02:29:30.875457 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jul 2 02:29:30.875457 ignition[998]: INFO : files: op(14): [started] processing unit "waagent.service" Jul 2 02:29:30.875457 ignition[998]: INFO : files: op(14): [finished] processing unit "waagent.service" Jul 2 02:29:30.875457 ignition[998]: INFO : files: op(15): [started] processing unit "nvidia.service" Jul 2 02:29:30.875457 ignition[998]: INFO : files: op(15): [finished] processing unit "nvidia.service" Jul 2 02:29:30.875457 ignition[998]: INFO : files: op(16): [started] processing unit "prepare-helm.service" Jul 2 02:29:30.953529 kernel: audit: type=1130 audit(1719887370.899:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:30.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 02:29:30.954394 ignition[998]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 02:29:30.954394 ignition[998]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 02:29:30.954394 ignition[998]: INFO : files: op(16): [finished] processing unit "prepare-helm.service" Jul 2 02:29:30.954394 ignition[998]: INFO : files: op(18): [started] setting preset to enabled for "prepare-helm.service" Jul 2 02:29:30.954394 ignition[998]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-helm.service" Jul 2 02:29:30.954394 ignition[998]: INFO : files: op(19): [started] setting preset to enabled for "waagent.service" Jul 2 02:29:30.954394 ignition[998]: INFO : files: op(19): [finished] setting preset to enabled for "waagent.service" Jul 2 02:29:30.954394 ignition[998]: INFO : files: op(1a): [started] setting preset to enabled for "nvidia.service" Jul 2 02:29:30.954394 ignition[998]: INFO : files: op(1a): [finished] setting preset to enabled for "nvidia.service" Jul 2 02:29:30.954394 ignition[998]: INFO : files: createResultFile: createFiles: op(1b): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 02:29:30.954394 ignition[998]: INFO : files: createResultFile: createFiles: op(1b): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 02:29:30.954394 ignition[998]: INFO : files: files passed Jul 2 02:29:30.954394 ignition[998]: INFO : Ignition finished successfully Jul 2 02:29:31.107197 kernel: audit: type=1130 audit(1719887370.958:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 02:29:30.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:30.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:30.890598 systemd[1]: Finished ignition-files.service. Jul 2 02:29:30.900643 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 2 02:29:31.117815 initrd-setup-root-after-ignition[1023]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 02:29:31.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:30.926620 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 2 02:29:30.934324 systemd[1]: Starting ignition-quench.service... Jul 2 02:29:30.946707 systemd[1]: ignition-quench.service: Deactivated successfully. 
Jul 2 02:29:30.946809 systemd[1]: Finished ignition-quench.service. Jul 2 02:29:31.001273 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 2 02:29:31.014555 systemd[1]: Reached target ignition-complete.target. Jul 2 02:29:31.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.027006 systemd[1]: Starting initrd-parse-etc.service... Jul 2 02:29:31.048089 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 02:29:31.048218 systemd[1]: Finished initrd-parse-etc.service. Jul 2 02:29:31.058664 systemd[1]: Reached target initrd-fs.target. Jul 2 02:29:31.070021 systemd[1]: Reached target initrd.target. Jul 2 02:29:31.083219 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 2 02:29:31.084043 systemd[1]: Starting dracut-pre-pivot.service... Jul 2 02:29:31.112655 systemd[1]: Finished dracut-pre-pivot.service. Jul 2 02:29:31.122969 systemd[1]: Starting initrd-cleanup.service... Jul 2 02:29:31.148867 systemd[1]: Stopped target nss-lookup.target. Jul 2 02:29:31.156175 systemd[1]: Stopped target remote-cryptsetup.target. Jul 2 02:29:31.166114 systemd[1]: Stopped target timers.target. Jul 2 02:29:31.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.174127 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 02:29:31.316383 kernel: kauditd_printk_skb: 6 callbacks suppressed Jul 2 02:29:31.316405 kernel: audit: type=1131 audit(1719887371.285:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 02:29:31.174236 systemd[1]: Stopped dracut-pre-pivot.service. Jul 2 02:29:31.183173 systemd[1]: Stopped target initrd.target. Jul 2 02:29:31.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.191278 systemd[1]: Stopped target basic.target. Jul 2 02:29:31.199160 systemd[1]: Stopped target ignition-complete.target. Jul 2 02:29:31.414738 kernel: audit: type=1131 audit(1719887371.328:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.414767 kernel: audit: type=1131 audit(1719887371.357:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.414786 kernel: audit: type=1131 audit(1719887371.384:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.208919 systemd[1]: Stopped target ignition-diskful.target. Jul 2 02:29:31.217469 systemd[1]: Stopped target initrd-root-device.target. 
Jul 2 02:29:31.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.226052 systemd[1]: Stopped target remote-fs.target. Jul 2 02:29:31.446760 kernel: audit: type=1131 audit(1719887371.424:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.233917 systemd[1]: Stopped target remote-fs-pre.target. Jul 2 02:29:31.244836 systemd[1]: Stopped target sysinit.target. Jul 2 02:29:31.252812 systemd[1]: Stopped target local-fs.target. Jul 2 02:29:31.260826 systemd[1]: Stopped target local-fs-pre.target. Jul 2 02:29:31.269910 systemd[1]: Stopped target swap.target. Jul 2 02:29:31.277472 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 02:29:31.277589 systemd[1]: Stopped dracut-pre-mount.service. Jul 2 02:29:31.311326 systemd[1]: Stopped target cryptsetup.target. Jul 2 02:29:31.320469 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 02:29:31.490339 ignition[1036]: INFO : Ignition 2.14.0 Jul 2 02:29:31.490339 ignition[1036]: INFO : Stage: umount Jul 2 02:29:31.490339 ignition[1036]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 02:29:31.490339 ignition[1036]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Jul 2 02:29:31.579247 kernel: audit: type=1131 audit(1719887371.521:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 02:29:31.579290 kernel: audit: type=1131 audit(1719887371.552:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.320570 systemd[1]: Stopped dracut-initqueue.service. Jul 2 02:29:31.583521 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 02:29:31.583521 ignition[1036]: INFO : umount: umount passed Jul 2 02:29:31.583521 ignition[1036]: INFO : Ignition finished successfully Jul 2 02:29:31.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.348983 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 02:29:31.669126 kernel: audit: type=1131 audit(1719887371.588:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.669166 kernel: audit: type=1131 audit(1719887371.612:54): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 02:29:31.669184 kernel: audit: type=1131 audit(1719887371.621:55): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.349102 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 2 02:29:31.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.357775 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 02:29:31.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.357857 systemd[1]: Stopped ignition-files.service. Jul 2 02:29:31.384633 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 2 02:29:31.384784 systemd[1]: Stopped flatcar-metadata-hostname.service. Jul 2 02:29:31.475766 systemd[1]: Stopping ignition-mount.service... Jul 2 02:29:31.484901 systemd[1]: Stopping iscsiuio.service... Jul 2 02:29:31.498935 systemd[1]: Stopping sysroot-boot.service... 
Jul 2 02:29:31.505685 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 02:29:31.505901 systemd[1]: Stopped systemd-udev-trigger.service. Jul 2 02:29:31.521714 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 02:29:31.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.521839 systemd[1]: Stopped dracut-pre-trigger.service. Jul 2 02:29:31.554062 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 2 02:29:31.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.567011 systemd[1]: Stopped iscsiuio.service. Jul 2 02:29:31.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.589727 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 02:29:31.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.800000 audit: BPF prog-id=6 op=UNLOAD Jul 2 02:29:31.590245 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 02:29:31.590351 systemd[1]: Stopped ignition-mount.service. Jul 2 02:29:31.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jul 2 02:29:31.614411 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 02:29:31.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.614466 systemd[1]: Stopped ignition-disks.service. Jul 2 02:29:31.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.621323 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 02:29:31.621359 systemd[1]: Stopped ignition-kargs.service. Jul 2 02:29:31.648479 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 2 02:29:31.648520 systemd[1]: Stopped ignition-fetch.service. Jul 2 02:29:31.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.676205 systemd[1]: Stopped target network.target. Jul 2 02:29:31.680279 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 02:29:31.680326 systemd[1]: Stopped ignition-fetch-offline.service. Jul 2 02:29:31.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.691530 systemd[1]: Stopped target paths.target. Jul 2 02:29:31.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.700160 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jul 2 02:29:31.925818 kernel: hv_netvsc 000d3a6e-d806-000d-3a6e-d806000d3a6e eth0: Data path switched from VF: enP8889s1 Jul 2 02:29:31.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.714181 systemd[1]: Stopped systemd-ask-password-console.path. Jul 2 02:29:31.719269 systemd[1]: Stopped target slices.target. Jul 2 02:29:31.727521 systemd[1]: Stopped target sockets.target. Jul 2 02:29:31.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.735278 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 02:29:31.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.735311 systemd[1]: Closed iscsid.socket. Jul 2 02:29:31.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.743147 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 02:29:31.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.743183 systemd[1]: Closed iscsiuio.socket. 
Jul 2 02:29:31.751910 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 02:29:31.751953 systemd[1]: Stopped ignition-setup.service. Jul 2 02:29:31.760334 systemd[1]: Stopping systemd-networkd.service... Jul 2 02:29:31.769383 systemd[1]: Stopping systemd-resolved.service... Jul 2 02:29:31.773064 systemd-networkd[835]: eth0: DHCPv6 lease lost Jul 2 02:29:31.998000 audit: BPF prog-id=9 op=UNLOAD Jul 2 02:29:31.774401 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 02:29:31.774482 systemd[1]: Finished initrd-cleanup.service. Jul 2 02:29:31.783698 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 02:29:31.783791 systemd[1]: Stopped systemd-networkd.service. Jul 2 02:29:31.793155 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 02:29:32.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.793236 systemd[1]: Stopped systemd-resolved.service. Jul 2 02:29:31.801122 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 02:29:31.801168 systemd[1]: Closed systemd-networkd.socket. Jul 2 02:29:31.810246 systemd[1]: Stopping network-cleanup.service... Jul 2 02:29:31.818334 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 02:29:31.818388 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 2 02:29:31.823548 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 02:29:31.823612 systemd[1]: Stopped systemd-sysctl.service. Jul 2 02:29:31.836960 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 02:29:32.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.837024 systemd[1]: Stopped systemd-modules-load.service. 
Jul 2 02:29:31.842037 systemd[1]: Stopping systemd-udevd.service... Jul 2 02:29:32.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:31.850250 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 2 02:29:31.858460 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 02:29:31.858603 systemd[1]: Stopped systemd-udevd.service. Jul 2 02:29:31.868596 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 02:29:31.868634 systemd[1]: Closed systemd-udevd-control.socket. Jul 2 02:29:31.876756 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 02:29:31.876800 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 2 02:29:31.884856 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 02:29:31.884908 systemd[1]: Stopped dracut-pre-udev.service. Jul 2 02:29:31.894591 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 02:29:32.155110 systemd-journald[275]: Received SIGTERM from PID 1 (n/a). Jul 2 02:29:32.155162 iscsid[847]: iscsid shutting down. Jul 2 02:29:31.894634 systemd[1]: Stopped dracut-cmdline.service. Jul 2 02:29:31.911600 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 02:29:31.911646 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 2 02:29:31.925161 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 2 02:29:31.936057 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 2 02:29:31.936151 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Jul 2 02:29:31.949628 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 02:29:31.949698 systemd[1]: Stopped kmod-static-nodes.service. Jul 2 02:29:31.954317 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jul 2 02:29:31.954356 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 2 02:29:31.964731 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 2 02:29:31.965225 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 02:29:31.965305 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 2 02:29:32.018121 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 02:29:32.018258 systemd[1]: Stopped network-cleanup.service. Jul 2 02:29:32.062162 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 02:29:32.062263 systemd[1]: Stopped sysroot-boot.service. Jul 2 02:29:32.070104 systemd[1]: Reached target initrd-switch-root.target. Jul 2 02:29:32.079891 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 02:29:32.079946 systemd[1]: Stopped initrd-setup-root.service. Jul 2 02:29:32.089732 systemd[1]: Starting initrd-switch-root.service... Jul 2 02:29:32.107937 systemd[1]: Switching root. Jul 2 02:29:32.155790 systemd-journald[275]: Journal stopped Jul 2 02:29:48.983974 kernel: SELinux: Class mctp_socket not defined in policy. Jul 2 02:29:48.983997 kernel: SELinux: Class anon_inode not defined in policy. 
Jul 2 02:29:48.984007 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 2 02:29:48.984017 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 02:29:48.984025 kernel: SELinux: policy capability open_perms=1 Jul 2 02:29:48.984033 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 02:29:48.984042 kernel: SELinux: policy capability always_check_network=0 Jul 2 02:29:48.984050 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 02:29:48.984057 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 02:29:48.984065 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 02:29:48.984074 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 02:29:48.984084 systemd[1]: Successfully loaded SELinux policy in 463.172ms. Jul 2 02:29:48.984094 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 66.651ms. Jul 2 02:29:48.984105 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 02:29:48.984116 systemd[1]: Detected virtualization microsoft. Jul 2 02:29:48.984125 systemd[1]: Detected architecture arm64. Jul 2 02:29:48.984144 systemd[1]: Detected first boot. Jul 2 02:29:48.984155 systemd[1]: Hostname set to . Jul 2 02:29:48.984164 systemd[1]: Initializing machine ID from random generator. 
Jul 2 02:29:48.984172 kernel: kauditd_printk_skb: 26 callbacks suppressed Jul 2 02:29:48.984182 kernel: audit: type=1400 audit(1719887376.992:82): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 02:29:48.984191 kernel: audit: type=1400 audit(1719887376.998:83): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 02:29:48.984203 kernel: audit: type=1334 audit(1719887377.017:84): prog-id=10 op=LOAD Jul 2 02:29:48.984212 kernel: audit: type=1334 audit(1719887377.017:85): prog-id=10 op=UNLOAD Jul 2 02:29:48.984220 kernel: audit: type=1334 audit(1719887377.035:86): prog-id=11 op=LOAD Jul 2 02:29:48.984228 kernel: audit: type=1334 audit(1719887377.035:87): prog-id=11 op=UNLOAD Jul 2 02:29:48.984237 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Jul 2 02:29:48.984247 kernel: audit: type=1400 audit(1719887378.905:88): avc: denied { associate } for pid=1072 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 2 02:29:48.984258 kernel: audit: type=1300 audit(1719887378.905:88): arch=c00000b7 syscall=5 success=yes exit=0 a0=4000145314 a1=40000c65e8 a2=40000ccac0 a3=32 items=0 ppid=1055 pid=1072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 02:29:48.984267 kernel: audit: type=1327 audit(1719887378.905:88): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 02:29:48.984276 kernel: audit: type=1400 audit(1719887378.930:89): avc: denied { associate } for pid=1072 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 2 02:29:48.984285 systemd[1]: Populated /etc with preset unit settings. Jul 2 02:29:48.984295 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 02:29:48.984304 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Jul 2 02:29:48.984315 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 02:29:48.984325 kernel: kauditd_printk_skb: 5 callbacks suppressed Jul 2 02:29:48.984333 kernel: audit: type=1334 audit(1719887388.218:90): prog-id=12 op=LOAD Jul 2 02:29:48.984341 kernel: audit: type=1334 audit(1719887388.218:91): prog-id=3 op=UNLOAD Jul 2 02:29:48.984350 kernel: audit: type=1334 audit(1719887388.223:92): prog-id=13 op=LOAD Jul 2 02:29:48.984358 kernel: audit: type=1334 audit(1719887388.229:93): prog-id=14 op=LOAD Jul 2 02:29:48.984369 kernel: audit: type=1334 audit(1719887388.229:94): prog-id=4 op=UNLOAD Jul 2 02:29:48.984378 systemd[1]: iscsid.service: Deactivated successfully. Jul 2 02:29:48.984387 kernel: audit: type=1334 audit(1719887388.229:95): prog-id=5 op=UNLOAD Jul 2 02:29:48.984398 systemd[1]: Stopped iscsid.service. Jul 2 02:29:48.984407 kernel: audit: type=1334 audit(1719887388.235:96): prog-id=15 op=LOAD Jul 2 02:29:48.984416 kernel: audit: type=1334 audit(1719887388.235:97): prog-id=12 op=UNLOAD Jul 2 02:29:48.984424 kernel: audit: type=1334 audit(1719887388.241:98): prog-id=16 op=LOAD Jul 2 02:29:48.984433 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 2 02:29:48.984442 kernel: audit: type=1334 audit(1719887388.246:99): prog-id=17 op=LOAD Jul 2 02:29:48.984451 systemd[1]: Stopped initrd-switch-root.service. Jul 2 02:29:48.984460 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 2 02:29:48.984472 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 2 02:29:48.984482 systemd[1]: Created slice system-addon\x2drun.slice. Jul 2 02:29:48.984491 systemd[1]: Created slice system-getty.slice. Jul 2 02:29:48.984500 systemd[1]: Created slice system-modprobe.slice. Jul 2 02:29:48.984510 systemd[1]: Created slice system-serial\x2dgetty.slice. 
Jul 2 02:29:48.984519 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 2 02:29:48.984528 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 2 02:29:48.984538 systemd[1]: Created slice user.slice. Jul 2 02:29:48.984547 systemd[1]: Started systemd-ask-password-console.path. Jul 2 02:29:48.984557 systemd[1]: Started systemd-ask-password-wall.path. Jul 2 02:29:48.984567 systemd[1]: Set up automount boot.automount. Jul 2 02:29:48.984576 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 2 02:29:48.984585 systemd[1]: Stopped target initrd-switch-root.target. Jul 2 02:29:48.984595 systemd[1]: Stopped target initrd-fs.target. Jul 2 02:29:48.984604 systemd[1]: Stopped target initrd-root-fs.target. Jul 2 02:29:48.984614 systemd[1]: Reached target integritysetup.target. Jul 2 02:29:48.984624 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 02:29:48.984633 systemd[1]: Reached target remote-fs.target. Jul 2 02:29:48.984643 systemd[1]: Reached target slices.target. Jul 2 02:29:48.984652 systemd[1]: Reached target swap.target. Jul 2 02:29:48.984661 systemd[1]: Reached target torcx.target. Jul 2 02:29:48.984671 systemd[1]: Reached target veritysetup.target. Jul 2 02:29:48.984680 systemd[1]: Listening on systemd-coredump.socket. Jul 2 02:29:48.984691 systemd[1]: Listening on systemd-initctl.socket. Jul 2 02:29:48.984701 systemd[1]: Listening on systemd-networkd.socket. Jul 2 02:29:48.984710 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 02:29:48.984720 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 02:29:48.984729 systemd[1]: Listening on systemd-userdbd.socket. Jul 2 02:29:48.984738 systemd[1]: Mounting dev-hugepages.mount... Jul 2 02:29:48.984747 systemd[1]: Mounting dev-mqueue.mount... Jul 2 02:29:48.984758 systemd[1]: Mounting media.mount... Jul 2 02:29:48.984767 systemd[1]: Mounting sys-kernel-debug.mount... Jul 2 02:29:48.984776 systemd[1]: Mounting sys-kernel-tracing.mount... 
Jul 2 02:29:48.984786 systemd[1]: Mounting tmp.mount... Jul 2 02:29:48.984796 systemd[1]: Starting flatcar-tmpfiles.service... Jul 2 02:29:48.984806 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 02:29:48.984815 systemd[1]: Starting kmod-static-nodes.service... Jul 2 02:29:48.984825 systemd[1]: Starting modprobe@configfs.service... Jul 2 02:29:48.984834 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 02:29:48.984844 systemd[1]: Starting modprobe@drm.service... Jul 2 02:29:48.984854 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 02:29:48.984863 systemd[1]: Starting modprobe@fuse.service... Jul 2 02:29:48.984873 systemd[1]: Starting modprobe@loop.service... Jul 2 02:29:48.984883 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 02:29:48.984892 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 2 02:29:48.984902 systemd[1]: Stopped systemd-fsck-root.service. Jul 2 02:29:48.984911 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 02:29:48.984920 systemd[1]: Stopped systemd-fsck-usr.service. Jul 2 02:29:48.984931 systemd[1]: Stopped systemd-journald.service. Jul 2 02:29:48.984940 systemd[1]: systemd-journald.service: Consumed 3.087s CPU time. Jul 2 02:29:48.984950 systemd[1]: Starting systemd-journald.service... Jul 2 02:29:48.984959 kernel: loop: module loaded Jul 2 02:29:48.984968 systemd[1]: Starting systemd-modules-load.service... Jul 2 02:29:48.984977 systemd[1]: Starting systemd-network-generator.service... Jul 2 02:29:48.984986 kernel: fuse: init (API version 7.34) Jul 2 02:29:48.984996 systemd[1]: Starting systemd-remount-fs.service... Jul 2 02:29:48.985005 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 02:29:48.985016 systemd[1]: verity-setup.service: Deactivated successfully. Jul 2 02:29:48.985025 systemd[1]: Stopped verity-setup.service. 
Jul 2 02:29:48.985035 systemd[1]: Mounted dev-hugepages.mount. Jul 2 02:29:48.985044 systemd[1]: Mounted dev-mqueue.mount. Jul 2 02:29:48.985053 systemd[1]: Mounted media.mount. Jul 2 02:29:48.985062 systemd[1]: Mounted sys-kernel-debug.mount. Jul 2 02:29:48.985072 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 2 02:29:48.985081 systemd[1]: Mounted tmp.mount. Jul 2 02:29:48.985091 systemd[1]: Finished flatcar-tmpfiles.service. Jul 2 02:29:48.985101 systemd[1]: Finished kmod-static-nodes.service. Jul 2 02:29:48.985117 systemd-journald[1161]: Journal started Jul 2 02:29:48.985161 systemd-journald[1161]: Runtime Journal (/run/log/journal/4a6a9346e3fb483b881480de49938611) is 8.0M, max 78.6M, 70.6M free. Jul 2 02:29:35.931000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 02:29:36.992000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 02:29:36.998000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 02:29:37.017000 audit: BPF prog-id=10 op=LOAD Jul 2 02:29:37.017000 audit: BPF prog-id=10 op=UNLOAD Jul 2 02:29:37.035000 audit: BPF prog-id=11 op=LOAD Jul 2 02:29:37.035000 audit: BPF prog-id=11 op=UNLOAD Jul 2 02:29:38.905000 audit[1072]: AVC avc: denied { associate } for pid=1072 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 2 02:29:38.905000 audit[1072]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=4000145314 a1=40000c65e8 a2=40000ccac0 a3=32 items=0 ppid=1055 pid=1072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 02:29:38.905000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 02:29:38.930000 audit[1072]: AVC avc: denied { associate } for pid=1072 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 2 02:29:38.930000 audit[1072]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001453f9 a2=1ed a3=0 items=2 ppid=1055 pid=1072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 02:29:38.930000 audit: CWD cwd="/" Jul 2 02:29:38.930000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 02:29:38.930000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 02:29:38.930000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 02:29:48.218000 audit: BPF prog-id=12 op=LOAD Jul 2 02:29:48.218000 audit: BPF prog-id=3 op=UNLOAD Jul 2 02:29:48.223000 audit: BPF prog-id=13 op=LOAD Jul 2 02:29:48.229000 audit: 
BPF prog-id=14 op=LOAD Jul 2 02:29:48.229000 audit: BPF prog-id=4 op=UNLOAD Jul 2 02:29:48.229000 audit: BPF prog-id=5 op=UNLOAD Jul 2 02:29:48.235000 audit: BPF prog-id=15 op=LOAD Jul 2 02:29:48.235000 audit: BPF prog-id=12 op=UNLOAD Jul 2 02:29:48.241000 audit: BPF prog-id=16 op=LOAD Jul 2 02:29:48.246000 audit: BPF prog-id=17 op=LOAD Jul 2 02:29:48.247000 audit: BPF prog-id=13 op=UNLOAD Jul 2 02:29:48.247000 audit: BPF prog-id=14 op=UNLOAD Jul 2 02:29:48.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:48.280000 audit: BPF prog-id=15 op=UNLOAD Jul 2 02:29:48.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:48.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:48.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:48.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:48.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 02:29:48.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:48.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:48.815000 audit: BPF prog-id=18 op=LOAD Jul 2 02:29:48.815000 audit: BPF prog-id=19 op=LOAD Jul 2 02:29:48.815000 audit: BPF prog-id=20 op=LOAD Jul 2 02:29:48.815000 audit: BPF prog-id=16 op=UNLOAD Jul 2 02:29:48.815000 audit: BPF prog-id=17 op=UNLOAD Jul 2 02:29:48.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:48.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:48.981000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 02:29:48.981000 audit[1161]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=fffff65c6360 a2=4000 a3=1 items=0 ppid=1 pid=1161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 02:29:48.981000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 2 02:29:48.216923 systemd[1]: Queued start job for default target multi-user.target. 
Jul 2 02:29:38.842409 /usr/lib/systemd/system-generators/torcx-generator[1072]: time="2024-07-02T02:29:38Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 02:29:48.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:48.248118 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 2 02:29:38.872455 /usr/lib/systemd/system-generators/torcx-generator[1072]: time="2024-07-02T02:29:38Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 02:29:48.248498 systemd[1]: systemd-journald.service: Consumed 3.087s CPU time. Jul 2 02:29:38.872476 /usr/lib/systemd/system-generators/torcx-generator[1072]: time="2024-07-02T02:29:38Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 02:29:38.872512 /usr/lib/systemd/system-generators/torcx-generator[1072]: time="2024-07-02T02:29:38Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 2 02:29:38.872523 /usr/lib/systemd/system-generators/torcx-generator[1072]: time="2024-07-02T02:29:38Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 2 02:29:38.872560 /usr/lib/systemd/system-generators/torcx-generator[1072]: time="2024-07-02T02:29:38Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 2 02:29:38.872571 /usr/lib/systemd/system-generators/torcx-generator[1072]: time="2024-07-02T02:29:38Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 2 
02:29:38.872768 /usr/lib/systemd/system-generators/torcx-generator[1072]: time="2024-07-02T02:29:38Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 2 02:29:38.872799 /usr/lib/systemd/system-generators/torcx-generator[1072]: time="2024-07-02T02:29:38Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 02:29:38.872810 /usr/lib/systemd/system-generators/torcx-generator[1072]: time="2024-07-02T02:29:38Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 02:29:38.894761 /usr/lib/systemd/system-generators/torcx-generator[1072]: time="2024-07-02T02:29:38Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 2 02:29:38.894813 /usr/lib/systemd/system-generators/torcx-generator[1072]: time="2024-07-02T02:29:38Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 2 02:29:38.894841 /usr/lib/systemd/system-generators/torcx-generator[1072]: time="2024-07-02T02:29:38Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.5: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.5 Jul 2 02:29:38.894856 /usr/lib/systemd/system-generators/torcx-generator[1072]: time="2024-07-02T02:29:38Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 2 02:29:38.894876 /usr/lib/systemd/system-generators/torcx-generator[1072]: time="2024-07-02T02:29:38Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.5: no such file or directory" path=/var/lib/torcx/store/3510.3.5 Jul 2 02:29:38.894889 /usr/lib/systemd/system-generators/torcx-generator[1072]: time="2024-07-02T02:29:38Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no 
such file or directory" path=/var/lib/torcx/store Jul 2 02:29:47.115602 /usr/lib/systemd/system-generators/torcx-generator[1072]: time="2024-07-02T02:29:47Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 02:29:47.115855 /usr/lib/systemd/system-generators/torcx-generator[1072]: time="2024-07-02T02:29:47Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 02:29:47.115959 /usr/lib/systemd/system-generators/torcx-generator[1072]: time="2024-07-02T02:29:47Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 02:29:47.116123 /usr/lib/systemd/system-generators/torcx-generator[1072]: time="2024-07-02T02:29:47Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 02:29:47.116187 /usr/lib/systemd/system-generators/torcx-generator[1072]: time="2024-07-02T02:29:47Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 2 02:29:47.116244 /usr/lib/systemd/system-generators/torcx-generator[1072]: time="2024-07-02T02:29:47Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 2 02:29:48.995313 systemd[1]: 
Started systemd-journald.service. Jul 2 02:29:48.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:48.995994 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 02:29:48.996121 systemd[1]: Finished modprobe@configfs.service. Jul 2 02:29:49.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:49.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:49.001555 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 02:29:49.001685 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 02:29:49.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:49.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:49.006550 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 02:29:49.006675 systemd[1]: Finished modprobe@drm.service. Jul 2 02:29:49.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 02:29:49.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:49.011128 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 02:29:49.011262 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 02:29:49.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:49.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:49.016184 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 02:29:49.016300 systemd[1]: Finished modprobe@fuse.service. Jul 2 02:29:49.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:49.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:49.020883 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 02:29:49.020994 systemd[1]: Finished modprobe@loop.service. Jul 2 02:29:49.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 02:29:49.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:49.025705 systemd[1]: Finished systemd-modules-load.service. Jul 2 02:29:49.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:49.030829 systemd[1]: Finished systemd-network-generator.service. Jul 2 02:29:49.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:49.036221 systemd[1]: Finished systemd-remount-fs.service. Jul 2 02:29:49.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:49.041238 systemd[1]: Reached target network-pre.target. Jul 2 02:29:49.046703 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 2 02:29:49.052012 systemd[1]: Mounting sys-kernel-config.mount... Jul 2 02:29:49.055995 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 02:29:49.057652 systemd[1]: Starting systemd-hwdb-update.service... Jul 2 02:29:49.062715 systemd[1]: Starting systemd-journal-flush.service... Jul 2 02:29:49.067168 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 02:29:49.068162 systemd[1]: Starting systemd-random-seed.service... 
Jul 2 02:29:49.072760 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 02:29:49.073872 systemd[1]: Starting systemd-sysctl.service... Jul 2 02:29:49.079349 systemd[1]: Starting systemd-sysusers.service... Jul 2 02:29:49.085751 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 2 02:29:49.090855 systemd[1]: Mounted sys-kernel-config.mount. Jul 2 02:29:49.107616 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 02:29:49.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:49.113615 systemd[1]: Starting systemd-udev-settle.service... Jul 2 02:29:49.120675 udevadm[1192]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 2 02:29:49.173722 systemd-journald[1161]: Time spent on flushing to /var/log/journal/4a6a9346e3fb483b881480de49938611 is 15.002ms for 1127 entries. Jul 2 02:29:49.173722 systemd-journald[1161]: System Journal (/var/log/journal/4a6a9346e3fb483b881480de49938611) is 8.0M, max 2.6G, 2.6G free. Jul 2 02:29:49.235218 systemd-journald[1161]: Received client request to flush runtime journal. Jul 2 02:29:49.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:49.193459 systemd[1]: Finished systemd-random-seed.service. Jul 2 02:29:49.198365 systemd[1]: Reached target first-boot-complete.target. Jul 2 02:29:49.236442 systemd[1]: Finished systemd-journal-flush.service. 
Jul 2 02:29:49.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:49.244611 systemd[1]: Finished systemd-sysctl.service. Jul 2 02:29:49.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:50.093532 systemd[1]: Finished systemd-sysusers.service. Jul 2 02:29:50.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:50.099457 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 2 02:29:50.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:50.521212 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 2 02:29:50.566772 systemd[1]: Finished systemd-hwdb-update.service. Jul 2 02:29:50.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:50.572000 audit: BPF prog-id=21 op=LOAD Jul 2 02:29:50.572000 audit: BPF prog-id=22 op=LOAD Jul 2 02:29:50.572000 audit: BPF prog-id=7 op=UNLOAD Jul 2 02:29:50.572000 audit: BPF prog-id=8 op=UNLOAD Jul 2 02:29:50.573341 systemd[1]: Starting systemd-udevd.service... Jul 2 02:29:50.591165 systemd-udevd[1197]: Using default interface naming scheme 'v252'. 
Jul 2 02:29:51.006861 systemd[1]: Started systemd-udevd.service. Jul 2 02:29:51.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:51.016000 audit: BPF prog-id=23 op=LOAD Jul 2 02:29:51.017805 systemd[1]: Starting systemd-networkd.service... Jul 2 02:29:51.043255 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Jul 2 02:29:51.130192 kernel: mousedev: PS/2 mouse device common for all mice Jul 2 02:29:51.131215 systemd[1]: Starting systemd-userdbd.service... Jul 2 02:29:51.130000 audit: BPF prog-id=24 op=LOAD Jul 2 02:29:51.130000 audit: BPF prog-id=25 op=LOAD Jul 2 02:29:51.130000 audit: BPF prog-id=26 op=LOAD Jul 2 02:29:51.135000 audit[1202]: AVC avc: denied { confidentiality } for pid=1202 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 2 02:29:51.146171 kernel: hv_vmbus: registering driver hv_balloon Jul 2 02:29:51.146246 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jul 2 02:29:51.146263 kernel: hv_balloon: Memory hot add disabled on ARM64 Jul 2 02:29:51.135000 audit[1202]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaae383a340 a1=aa2c a2=ffffb25124b0 a3=aaaae379a010 items=12 ppid=1197 pid=1202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 02:29:51.135000 audit: CWD cwd="/" Jul 2 02:29:51.135000 audit: PATH item=0 name=(null) inode=7241 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 02:29:51.135000 audit: PATH item=1 name=(null) inode=11424 dev=00:0a mode=040750 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 02:29:51.135000 audit: PATH item=2 name=(null) inode=11424 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 02:29:51.135000 audit: PATH item=3 name=(null) inode=11425 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 02:29:51.135000 audit: PATH item=4 name=(null) inode=11424 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 02:29:51.135000 audit: PATH item=5 name=(null) inode=11426 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 02:29:51.135000 audit: PATH item=6 name=(null) inode=11424 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 02:29:51.135000 audit: PATH item=7 name=(null) inode=11427 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 02:29:51.135000 audit: PATH item=8 name=(null) inode=11424 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 02:29:51.135000 audit: PATH item=9 name=(null) inode=11428 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 02:29:51.135000 audit: PATH item=10 name=(null) inode=11424 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 02:29:51.135000 audit: PATH item=11 name=(null) inode=11429 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 02:29:51.135000 audit: PROCTITLE proctitle="(udev-worker)" Jul 2 02:29:51.173608 kernel: hv_vmbus: registering driver hyperv_fb Jul 2 02:29:51.173706 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jul 2 02:29:51.180682 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jul 2 02:29:51.181153 kernel: hv_utils: Registering HyperV Utility Driver Jul 2 02:29:51.188893 kernel: hv_vmbus: registering driver hv_utils Jul 2 02:29:51.193118 kernel: Console: switching to colour dummy device 80x25 Jul 2 02:29:51.200310 kernel: hv_utils: Heartbeat IC version 3.0 Jul 2 02:29:51.200365 kernel: hv_utils: Shutdown IC version 3.2 Jul 2 02:29:51.200381 kernel: hv_utils: TimeSync IC version 4.0 Jul 2 02:29:51.117328 kernel: Console: switching to colour frame buffer device 128x48 Jul 2 02:29:51.152342 systemd-journald[1161]: Time jumped backwards, rotating. Jul 2 02:29:51.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:51.133657 systemd[1]: Started systemd-userdbd.service. Jul 2 02:29:51.603347 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1218) Jul 2 02:29:51.621858 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 02:29:51.627483 systemd[1]: Finished systemd-udev-settle.service. Jul 2 02:29:51.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 2 02:29:51.633104 systemd[1]: Starting lvm2-activation-early.service... Jul 2 02:29:51.695185 systemd-networkd[1217]: lo: Link UP Jul 2 02:29:51.695456 systemd-networkd[1217]: lo: Gained carrier Jul 2 02:29:51.695957 systemd-networkd[1217]: Enumeration completed Jul 2 02:29:51.696148 systemd[1]: Started systemd-networkd.service. Jul 2 02:29:51.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:51.701918 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 2 02:29:51.765386 systemd-networkd[1217]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 02:29:51.814326 kernel: mlx5_core 22b9:00:02.0 enP8889s1: Link up Jul 2 02:29:51.841105 systemd-networkd[1217]: enP8889s1: Link UP Jul 2 02:29:51.841331 kernel: hv_netvsc 000d3a6e-d806-000d-3a6e-d806000d3a6e eth0: Data path switched to VF: enP8889s1 Jul 2 02:29:51.841524 systemd-networkd[1217]: eth0: Link UP Jul 2 02:29:51.841596 systemd-networkd[1217]: eth0: Gained carrier Jul 2 02:29:51.845555 systemd-networkd[1217]: enP8889s1: Gained carrier Jul 2 02:29:51.859436 systemd-networkd[1217]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 2 02:29:52.191461 lvm[1274]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 02:29:52.265148 systemd[1]: Finished lvm2-activation-early.service. Jul 2 02:29:52.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:52.270244 systemd[1]: Reached target cryptsetup.target. Jul 2 02:29:52.276264 systemd[1]: Starting lvm2-activation.service... 
Jul 2 02:29:52.280489 lvm[1276]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 02:29:52.300545 systemd[1]: Finished lvm2-activation.service. Jul 2 02:29:52.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:52.305851 systemd[1]: Reached target local-fs-pre.target. Jul 2 02:29:52.311120 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 02:29:52.311181 systemd[1]: Reached target local-fs.target. Jul 2 02:29:52.315699 systemd[1]: Reached target machines.target. Jul 2 02:29:52.321883 systemd[1]: Starting ldconfig.service... Jul 2 02:29:52.352542 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 02:29:52.352602 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 02:29:52.353744 systemd[1]: Starting systemd-boot-update.service... Jul 2 02:29:52.359349 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 2 02:29:52.365683 systemd[1]: Starting systemd-machine-id-commit.service... Jul 2 02:29:52.372513 systemd[1]: Starting systemd-sysext.service... Jul 2 02:29:52.442835 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1278 (bootctl) Jul 2 02:29:52.444153 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 2 02:29:52.868434 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
Jul 2 02:29:52.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:52.886687 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 02:29:52.887361 systemd[1]: Finished systemd-machine-id-commit.service. Jul 2 02:29:52.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:52.904817 systemd[1]: Unmounting usr-share-oem.mount... Jul 2 02:29:53.010680 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 2 02:29:53.010865 systemd[1]: Unmounted usr-share-oem.mount. Jul 2 02:29:53.077339 kernel: loop0: detected capacity change from 0 to 194512 Jul 2 02:29:53.090465 systemd-networkd[1217]: eth0: Gained IPv6LL Jul 2 02:29:53.096157 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 2 02:29:53.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:53.129335 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 02:29:53.139541 systemd-fsck[1285]: fsck.fat 4.2 (2021-01-31) Jul 2 02:29:53.139541 systemd-fsck[1285]: /dev/sda1: 236 files, 117047/258078 clusters Jul 2 02:29:53.141646 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
Jul 2 02:29:53.152359 kernel: loop1: detected capacity change from 0 to 194512 Jul 2 02:29:53.157873 kernel: kauditd_printk_skb: 78 callbacks suppressed Jul 2 02:29:53.157949 kernel: audit: type=1130 audit(1719887393.151:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:53.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:53.156932 systemd[1]: Mounting boot.mount... Jul 2 02:29:53.183700 systemd[1]: Mounted boot.mount. Jul 2 02:29:53.192141 (sd-sysext)[1290]: Using extensions 'kubernetes'. Jul 2 02:29:53.192495 (sd-sysext)[1290]: Merged extensions into '/usr'. Jul 2 02:29:53.197020 systemd[1]: Finished systemd-boot-update.service. Jul 2 02:29:53.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:53.219347 kernel: audit: type=1130 audit(1719887393.200:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:53.229095 systemd[1]: Mounting usr-share-oem.mount... Jul 2 02:29:53.233620 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 02:29:53.235182 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 02:29:53.241220 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 02:29:53.247088 systemd[1]: Starting modprobe@loop.service... 
Jul 2 02:29:53.251400 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 02:29:53.251675 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 02:29:53.254268 systemd[1]: Mounted usr-share-oem.mount. Jul 2 02:29:53.259117 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 02:29:53.259392 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 02:29:53.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:53.265121 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 02:29:53.265388 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 02:29:53.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:53.302055 kernel: audit: type=1130 audit(1719887393.263:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:53.302119 kernel: audit: type=1131 audit(1719887393.263:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:53.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 02:29:53.303206 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 02:29:53.303476 systemd[1]: Finished modprobe@loop.service. Jul 2 02:29:53.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:53.340070 kernel: audit: type=1130 audit(1719887393.301:165): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:53.340134 kernel: audit: type=1131 audit(1719887393.301:166): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:53.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:53.341195 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 02:29:53.341447 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 02:29:53.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:53.358696 systemd[1]: Finished systemd-sysext.service. Jul 2 02:29:53.374471 kernel: audit: type=1130 audit(1719887393.339:167): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 02:29:53.374564 kernel: audit: type=1131 audit(1719887393.339:168): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:53.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:53.380416 systemd[1]: Starting ensure-sysext.service... Jul 2 02:29:53.397449 kernel: audit: type=1130 audit(1719887393.377:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:53.401820 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 2 02:29:53.416067 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 2 02:29:53.416372 systemd[1]: Reloading. Jul 2 02:29:53.451331 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 02:29:53.458476 /usr/lib/systemd/system-generators/torcx-generator[1321]: time="2024-07-02T02:29:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 02:29:53.459391 /usr/lib/systemd/system-generators/torcx-generator[1321]: time="2024-07-02T02:29:53Z" level=info msg="torcx already run" Jul 2 02:29:53.493080 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Jul 2 02:29:53.549521 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 02:29:53.549546 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 02:29:53.565184 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 02:29:53.626000 audit: BPF prog-id=27 op=LOAD Jul 2 02:29:53.626000 audit: BPF prog-id=18 op=UNLOAD Jul 2 02:29:53.633000 audit: BPF prog-id=28 op=LOAD Jul 2 02:29:53.634000 audit: BPF prog-id=29 op=LOAD Jul 2 02:29:53.634000 audit: BPF prog-id=19 op=UNLOAD Jul 2 02:29:53.634000 audit: BPF prog-id=20 op=UNLOAD Jul 2 02:29:53.636329 kernel: audit: type=1334 audit(1719887393.626:170): prog-id=27 op=LOAD Jul 2 02:29:53.635000 audit: BPF prog-id=30 op=LOAD Jul 2 02:29:53.635000 audit: BPF prog-id=23 op=UNLOAD Jul 2 02:29:53.636000 audit: BPF prog-id=31 op=LOAD Jul 2 02:29:53.636000 audit: BPF prog-id=32 op=LOAD Jul 2 02:29:53.636000 audit: BPF prog-id=21 op=UNLOAD Jul 2 02:29:53.636000 audit: BPF prog-id=22 op=UNLOAD Jul 2 02:29:53.637000 audit: BPF prog-id=33 op=LOAD Jul 2 02:29:53.637000 audit: BPF prog-id=24 op=UNLOAD Jul 2 02:29:53.637000 audit: BPF prog-id=34 op=LOAD Jul 2 02:29:53.637000 audit: BPF prog-id=35 op=LOAD Jul 2 02:29:53.637000 audit: BPF prog-id=25 op=UNLOAD Jul 2 02:29:53.637000 audit: BPF prog-id=26 op=UNLOAD Jul 2 02:29:53.650511 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 02:29:53.651946 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 02:29:53.657290 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 02:29:53.662701 systemd[1]: Starting modprobe@loop.service... 
Jul 2 02:29:53.666661 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 02:29:53.666870 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 02:29:53.667764 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 02:29:53.667994 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 02:29:53.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:53.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:53.672975 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 02:29:53.673169 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 02:29:53.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:53.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:53.678724 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 02:29:53.678922 systemd[1]: Finished modprobe@loop.service. Jul 2 02:29:53.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 02:29:53.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:53.684775 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 02:29:53.686181 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 02:29:53.691883 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 02:29:53.697730 systemd[1]: Starting modprobe@loop.service... Jul 2 02:29:53.702072 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 02:29:53.702355 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 02:29:53.703218 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 02:29:53.703478 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 02:29:53.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:53.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:53.708812 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 02:29:53.709015 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 02:29:53.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 02:29:53.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:53.714815 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 02:29:53.715012 systemd[1]: Finished modprobe@loop.service. Jul 2 02:29:53.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:53.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:53.721999 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 02:29:53.723398 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 02:29:53.729296 systemd[1]: Starting modprobe@drm.service... Jul 2 02:29:53.734434 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 02:29:53.740127 systemd[1]: Starting modprobe@loop.service... Jul 2 02:29:53.744225 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 02:29:53.744457 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 02:29:53.745503 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 02:29:53.745721 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 02:29:53.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 02:29:53.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:53.751365 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 02:29:53.751577 systemd[1]: Finished modprobe@drm.service. Jul 2 02:29:53.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:53.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:53.756507 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 02:29:53.756713 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 02:29:53.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:53.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:53.764761 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 02:29:53.764963 systemd[1]: Finished modprobe@loop.service. Jul 2 02:29:53.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 02:29:53.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:53.770346 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 02:29:53.770546 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 02:29:53.771831 systemd[1]: Finished ensure-sysext.service. Jul 2 02:29:53.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:54.056570 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 2 02:29:54.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:54.064048 systemd[1]: Starting audit-rules.service... Jul 2 02:29:54.069287 systemd[1]: Starting clean-ca-certificates.service... Jul 2 02:29:54.075360 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 2 02:29:54.080000 audit: BPF prog-id=36 op=LOAD Jul 2 02:29:54.082716 systemd[1]: Starting systemd-resolved.service... Jul 2 02:29:54.086000 audit: BPF prog-id=37 op=LOAD Jul 2 02:29:54.088512 systemd[1]: Starting systemd-timesyncd.service... Jul 2 02:29:54.094197 systemd[1]: Starting systemd-update-utmp.service... Jul 2 02:29:54.119000 audit[1396]: SYSTEM_BOOT pid=1396 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? 
res=success' Jul 2 02:29:54.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:54.123631 systemd[1]: Finished systemd-update-utmp.service. Jul 2 02:29:54.217459 systemd[1]: Started systemd-timesyncd.service. Jul 2 02:29:54.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:54.222642 systemd[1]: Finished clean-ca-certificates.service. Jul 2 02:29:54.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:54.227509 systemd[1]: Reached target time-set.target. Jul 2 02:29:54.232153 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 02:29:54.345909 systemd-resolved[1394]: Positive Trust Anchors: Jul 2 02:29:54.345922 systemd-resolved[1394]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 02:29:54.345947 systemd-resolved[1394]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 02:29:54.349595 systemd-resolved[1394]: Using system hostname 'ci-3510.3.5-a-c92d6bc2c6'. 
Jul 2 02:29:54.350997 systemd[1]: Started systemd-resolved.service. Jul 2 02:29:54.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:54.356969 systemd[1]: Reached target network.target. Jul 2 02:29:54.361269 systemd[1]: Reached target network-online.target. Jul 2 02:29:54.365999 systemd[1]: Reached target nss-lookup.target. Jul 2 02:29:54.370826 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 2 02:29:54.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 02:29:54.477960 systemd-timesyncd[1395]: Contacted time server 69.10.223.132:123 (0.flatcar.pool.ntp.org). Jul 2 02:29:54.478320 systemd-timesyncd[1395]: Initial clock synchronization to Tue 2024-07-02 02:29:54.476932 UTC. Jul 2 02:29:54.715000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 2 02:29:54.715000 audit[1412]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff065f6e0 a2=420 a3=0 items=0 ppid=1391 pid=1412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 02:29:54.715000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 2 02:29:54.726932 augenrules[1412]: No rules Jul 2 02:29:54.727796 systemd[1]: Finished audit-rules.service. Jul 2 02:30:03.813729 ldconfig[1277]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 02:30:03.829071 systemd[1]: Finished ldconfig.service. 
Jul 2 02:30:03.835010 systemd[1]: Starting systemd-update-done.service... Jul 2 02:30:03.858360 systemd[1]: Finished systemd-update-done.service. Jul 2 02:30:03.863460 systemd[1]: Reached target sysinit.target. Jul 2 02:30:03.867656 systemd[1]: Started motdgen.path. Jul 2 02:30:03.871222 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 2 02:30:03.877663 systemd[1]: Started logrotate.timer. Jul 2 02:30:03.881762 systemd[1]: Started mdadm.timer. Jul 2 02:30:03.885667 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 2 02:30:03.890623 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 02:30:03.890655 systemd[1]: Reached target paths.target. Jul 2 02:30:03.894989 systemd[1]: Reached target timers.target. Jul 2 02:30:03.900152 systemd[1]: Listening on dbus.socket. Jul 2 02:30:03.905399 systemd[1]: Starting docker.socket... Jul 2 02:30:03.921527 systemd[1]: Listening on sshd.socket. Jul 2 02:30:03.925793 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 02:30:03.926275 systemd[1]: Listening on docker.socket. Jul 2 02:30:03.930827 systemd[1]: Reached target sockets.target. Jul 2 02:30:03.935204 systemd[1]: Reached target basic.target. Jul 2 02:30:03.939560 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 02:30:03.939586 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 02:30:03.940761 systemd[1]: Starting containerd.service... Jul 2 02:30:03.946015 systemd[1]: Starting dbus.service... Jul 2 02:30:03.950486 systemd[1]: Starting enable-oem-cloudinit.service... Jul 2 02:30:03.956079 systemd[1]: Starting extend-filesystems.service... 
Jul 2 02:30:03.963294 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 2 02:30:03.974780 systemd[1]: Starting kubelet.service... Jul 2 02:30:03.979285 systemd[1]: Starting motdgen.service... Jul 2 02:30:03.983931 systemd[1]: Started nvidia.service. Jul 2 02:30:03.989093 systemd[1]: Starting prepare-helm.service... Jul 2 02:30:03.993846 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 2 02:30:03.999038 systemd[1]: Starting sshd-keygen.service... Jul 2 02:30:04.005373 systemd[1]: Starting systemd-logind.service... Jul 2 02:30:04.009577 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 02:30:04.009641 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 02:30:04.010033 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 02:30:04.010680 systemd[1]: Starting update-engine.service... Jul 2 02:30:04.015766 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 2 02:30:04.026051 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 02:30:04.026217 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 2 02:30:04.090373 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 02:30:04.090546 systemd[1]: Finished motdgen.service. Jul 2 02:30:04.097247 jq[1422]: false Jul 2 02:30:04.097907 jq[1440]: true Jul 2 02:30:04.111529 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 02:30:04.111688 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. 
Jul 2 02:30:04.132649 extend-filesystems[1423]: Found loop1 Jul 2 02:30:04.137458 extend-filesystems[1423]: Found sda Jul 2 02:30:04.137458 extend-filesystems[1423]: Found sda1 Jul 2 02:30:04.137458 extend-filesystems[1423]: Found sda2 Jul 2 02:30:04.137458 extend-filesystems[1423]: Found sda3 Jul 2 02:30:04.137458 extend-filesystems[1423]: Found usr Jul 2 02:30:04.137458 extend-filesystems[1423]: Found sda4 Jul 2 02:30:04.137458 extend-filesystems[1423]: Found sda6 Jul 2 02:30:04.137458 extend-filesystems[1423]: Found sda7 Jul 2 02:30:04.137458 extend-filesystems[1423]: Found sda9 Jul 2 02:30:04.137458 extend-filesystems[1423]: Checking size of /dev/sda9 Jul 2 02:30:04.185137 jq[1449]: true Jul 2 02:30:04.188335 systemd-logind[1435]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 2 02:30:04.191799 systemd-logind[1435]: New seat seat0. Jul 2 02:30:04.222324 env[1446]: time="2024-07-02T02:30:04.220019536Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 2 02:30:04.242025 extend-filesystems[1423]: Old size kept for /dev/sda9 Jul 2 02:30:04.253667 extend-filesystems[1423]: Found sr0 Jul 2 02:30:04.247081 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 02:30:04.247251 systemd[1]: Finished extend-filesystems.service. Jul 2 02:30:04.264601 env[1446]: time="2024-07-02T02:30:04.264439305Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 02:30:04.265533 env[1446]: time="2024-07-02T02:30:04.265500351Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 02:30:04.267428 env[1446]: time="2024-07-02T02:30:04.266801589Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.161-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 02:30:04.267428 env[1446]: time="2024-07-02T02:30:04.266835348Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 02:30:04.267428 env[1446]: time="2024-07-02T02:30:04.267041461Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 02:30:04.267428 env[1446]: time="2024-07-02T02:30:04.267057780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 02:30:04.267428 env[1446]: time="2024-07-02T02:30:04.267079460Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 02:30:04.267428 env[1446]: time="2024-07-02T02:30:04.267089779Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 02:30:04.267428 env[1446]: time="2024-07-02T02:30:04.267160617Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 02:30:04.267428 env[1446]: time="2024-07-02T02:30:04.267396450Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 02:30:04.267618 env[1446]: time="2024-07-02T02:30:04.267554005Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 02:30:04.267618 env[1446]: time="2024-07-02T02:30:04.267571764Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 02:30:04.267656 env[1446]: time="2024-07-02T02:30:04.267630242Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 02:30:04.267656 env[1446]: time="2024-07-02T02:30:04.267642362Z" level=info msg="metadata content store policy set" policy=shared Jul 2 02:30:04.279849 tar[1443]: linux-arm64/helm Jul 2 02:30:04.282402 env[1446]: time="2024-07-02T02:30:04.282361407Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 02:30:04.282481 env[1446]: time="2024-07-02T02:30:04.282409566Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 02:30:04.282481 env[1446]: time="2024-07-02T02:30:04.282424965Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 02:30:04.282481 env[1446]: time="2024-07-02T02:30:04.282465764Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 02:30:04.282541 env[1446]: time="2024-07-02T02:30:04.282481963Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 02:30:04.282541 env[1446]: time="2024-07-02T02:30:04.282497923Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 02:30:04.282541 env[1446]: time="2024-07-02T02:30:04.282510002Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Jul 2 02:30:04.282894 env[1446]: time="2024-07-02T02:30:04.282864751Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 02:30:04.282894 env[1446]: time="2024-07-02T02:30:04.282890310Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 2 02:30:04.282963 env[1446]: time="2024-07-02T02:30:04.282903910Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 02:30:04.282963 env[1446]: time="2024-07-02T02:30:04.282917669Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 02:30:04.282963 env[1446]: time="2024-07-02T02:30:04.282930349Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 02:30:04.283079 env[1446]: time="2024-07-02T02:30:04.283052385Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 02:30:04.283155 env[1446]: time="2024-07-02T02:30:04.283135182Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 02:30:04.283414 env[1446]: time="2024-07-02T02:30:04.283387934Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 02:30:04.283472 env[1446]: time="2024-07-02T02:30:04.283419253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 02:30:04.283472 env[1446]: time="2024-07-02T02:30:04.283437653Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 02:30:04.283522 env[1446]: time="2024-07-02T02:30:04.283479451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Jul 2 02:30:04.283522 env[1446]: time="2024-07-02T02:30:04.283492891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 02:30:04.283522 env[1446]: time="2024-07-02T02:30:04.283505050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 02:30:04.283522 env[1446]: time="2024-07-02T02:30:04.283515850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 02:30:04.283620 env[1446]: time="2024-07-02T02:30:04.283527730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 02:30:04.283620 env[1446]: time="2024-07-02T02:30:04.283540329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 02:30:04.283620 env[1446]: time="2024-07-02T02:30:04.283552009Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 02:30:04.283620 env[1446]: time="2024-07-02T02:30:04.283563209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 02:30:04.283620 env[1446]: time="2024-07-02T02:30:04.283575928Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 02:30:04.283722 env[1446]: time="2024-07-02T02:30:04.283687725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 02:30:04.283722 env[1446]: time="2024-07-02T02:30:04.283703924Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 02:30:04.283722 env[1446]: time="2024-07-02T02:30:04.283715844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jul 2 02:30:04.283774 env[1446]: time="2024-07-02T02:30:04.283727203Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 02:30:04.283774 env[1446]: time="2024-07-02T02:30:04.283741483Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 2 02:30:04.283774 env[1446]: time="2024-07-02T02:30:04.283753002Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 02:30:04.283774 env[1446]: time="2024-07-02T02:30:04.283769762Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 2 02:30:04.283848 env[1446]: time="2024-07-02T02:30:04.283802561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 2 02:30:04.284063 env[1446]: time="2024-07-02T02:30:04.283999714Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 02:30:04.309548 env[1446]: time="2024-07-02T02:30:04.284061912Z" level=info msg="Connect containerd service" Jul 2 02:30:04.309548 env[1446]: time="2024-07-02T02:30:04.284098271Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 02:30:04.309548 env[1446]: time="2024-07-02T02:30:04.286802624Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 02:30:04.309548 env[1446]: time="2024-07-02T02:30:04.286961059Z" level=info msg="Start subscribing containerd event" Jul 2 02:30:04.309548 env[1446]: time="2024-07-02T02:30:04.287013857Z" level=info msg="Start recovering state" Jul 2 02:30:04.309548 env[1446]: 
time="2024-07-02T02:30:04.287077015Z" level=info msg="Start event monitor" Jul 2 02:30:04.309548 env[1446]: time="2024-07-02T02:30:04.287094535Z" level=info msg="Start snapshots syncer" Jul 2 02:30:04.309548 env[1446]: time="2024-07-02T02:30:04.287104734Z" level=info msg="Start cni network conf syncer for default" Jul 2 02:30:04.309548 env[1446]: time="2024-07-02T02:30:04.287111934Z" level=info msg="Start streaming server" Jul 2 02:30:04.309548 env[1446]: time="2024-07-02T02:30:04.291510712Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 02:30:04.309548 env[1446]: time="2024-07-02T02:30:04.291590390Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 02:30:04.291718 systemd[1]: Started containerd.service. Jul 2 02:30:04.312411 env[1446]: time="2024-07-02T02:30:04.291650588Z" level=info msg="containerd successfully booted in 0.079135s" Jul 2 02:30:04.410366 dbus-daemon[1421]: [system] SELinux support is enabled Jul 2 02:30:04.416934 dbus-daemon[1421]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 2 02:30:04.410522 systemd[1]: Started dbus.service. Jul 2 02:30:04.416418 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 02:30:04.416439 systemd[1]: Reached target system-config.target. Jul 2 02:30:04.425611 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 02:30:04.425628 systemd[1]: Reached target user-config.target. Jul 2 02:30:04.432254 bash[1487]: Updated "/home/core/.ssh/authorized_keys" Jul 2 02:30:04.432965 systemd[1]: Started systemd-logind.service. Jul 2 02:30:04.439836 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 2 02:30:04.604293 systemd[1]: nvidia.service: Deactivated successfully. 
Jul 2 02:30:04.776836 tar[1443]: linux-arm64/LICENSE Jul 2 02:30:04.776937 tar[1443]: linux-arm64/README.md Jul 2 02:30:04.781169 systemd[1]: Finished prepare-helm.service. Jul 2 02:30:04.900765 systemd[1]: Started kubelet.service. Jul 2 02:30:05.213596 update_engine[1437]: I0702 02:30:05.176862 1437 main.cc:92] Flatcar Update Engine starting Jul 2 02:30:05.314513 systemd[1]: Started update-engine.service. Jul 2 02:30:05.314799 update_engine[1437]: I0702 02:30:05.314543 1437 update_check_scheduler.cc:74] Next update check in 6m40s Jul 2 02:30:05.321420 systemd[1]: Started locksmithd.service. Jul 2 02:30:05.403665 kubelet[1530]: E0702 02:30:05.403604 1530 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 02:30:05.405954 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 02:30:05.406069 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 02:30:06.210407 sshd_keygen[1439]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 02:30:06.227284 systemd[1]: Finished sshd-keygen.service. Jul 2 02:30:06.233301 systemd[1]: Starting issuegen.service... Jul 2 02:30:06.238143 systemd[1]: Started waagent.service. Jul 2 02:30:06.242809 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 02:30:06.242965 systemd[1]: Finished issuegen.service. Jul 2 02:30:06.248748 systemd[1]: Starting systemd-user-sessions.service... Jul 2 02:30:06.277147 systemd[1]: Finished systemd-user-sessions.service. Jul 2 02:30:06.283774 systemd[1]: Started getty@tty1.service. Jul 2 02:30:06.289378 systemd[1]: Started serial-getty@ttyAMA0.service. Jul 2 02:30:06.294501 systemd[1]: Reached target getty.target. 
Jul 2 02:30:06.298885 systemd[1]: Reached target multi-user.target. Jul 2 02:30:06.304371 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 2 02:30:06.316963 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 2 02:30:06.317129 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 2 02:30:06.322862 systemd[1]: Startup finished in 984ms (kernel) + 18.320s (initrd) + 31.207s (userspace) = 50.511s. Jul 2 02:30:06.641048 locksmithd[1536]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 02:30:06.952286 login[1555]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Jul 2 02:30:06.964538 login[1556]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 2 02:30:07.011580 systemd[1]: Created slice user-500.slice. Jul 2 02:30:07.012712 systemd[1]: Starting user-runtime-dir@500.service... Jul 2 02:30:07.015788 systemd-logind[1435]: New session 1 of user core. Jul 2 02:30:07.032529 systemd[1]: Finished user-runtime-dir@500.service. Jul 2 02:30:07.033998 systemd[1]: Starting user@500.service... Jul 2 02:30:07.070436 (systemd)[1559]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 02:30:07.362416 systemd[1559]: Queued start job for default target default.target. Jul 2 02:30:07.362930 systemd[1559]: Reached target paths.target. Jul 2 02:30:07.362951 systemd[1559]: Reached target sockets.target. Jul 2 02:30:07.362962 systemd[1559]: Reached target timers.target. Jul 2 02:30:07.362971 systemd[1559]: Reached target basic.target. Jul 2 02:30:07.363014 systemd[1559]: Reached target default.target. Jul 2 02:30:07.363039 systemd[1559]: Startup finished in 286ms. Jul 2 02:30:07.363083 systemd[1]: Started user@500.service. Jul 2 02:30:07.364022 systemd[1]: Started session-1.scope. 
Jul 2 02:30:07.952699 login[1555]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 2 02:30:07.956773 systemd-logind[1435]: New session 2 of user core. Jul 2 02:30:07.957167 systemd[1]: Started session-2.scope. Jul 2 02:30:13.965851 waagent[1553]: 2024-07-02T02:30:13.965745Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Jul 2 02:30:13.972747 waagent[1553]: 2024-07-02T02:30:13.972677Z INFO Daemon Daemon OS: flatcar 3510.3.5 Jul 2 02:30:13.977489 waagent[1553]: 2024-07-02T02:30:13.977432Z INFO Daemon Daemon Python: 3.9.16 Jul 2 02:30:13.982487 waagent[1553]: 2024-07-02T02:30:13.982411Z INFO Daemon Daemon Run daemon Jul 2 02:30:13.987132 waagent[1553]: 2024-07-02T02:30:13.987056Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.5' Jul 2 02:30:14.005429 waagent[1553]: 2024-07-02T02:30:14.005288Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Jul 2 02:30:14.020753 waagent[1553]: 2024-07-02T02:30:14.020626Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 2 02:30:14.030278 waagent[1553]: 2024-07-02T02:30:14.030211Z INFO Daemon Daemon cloud-init is enabled: False Jul 2 02:30:14.035516 waagent[1553]: 2024-07-02T02:30:14.035452Z INFO Daemon Daemon Using waagent for provisioning Jul 2 02:30:14.041096 waagent[1553]: 2024-07-02T02:30:14.041033Z INFO Daemon Daemon Activate resource disk Jul 2 02:30:14.045752 waagent[1553]: 2024-07-02T02:30:14.045693Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 2 02:30:14.060087 waagent[1553]: 2024-07-02T02:30:14.060022Z INFO Daemon Daemon Found device: None Jul 2 02:30:14.064803 waagent[1553]: 2024-07-02T02:30:14.064737Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jul 2 02:30:14.072952 waagent[1553]: 2024-07-02T02:30:14.072892Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jul 2 02:30:14.084701 waagent[1553]: 2024-07-02T02:30:14.084638Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 2 02:30:14.090653 waagent[1553]: 2024-07-02T02:30:14.090594Z INFO Daemon Daemon Running default provisioning handler Jul 2 02:30:14.104600 waagent[1553]: 2024-07-02T02:30:14.104474Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Jul 2 02:30:14.119591 waagent[1553]: 2024-07-02T02:30:14.119462Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 2 02:30:14.129700 waagent[1553]: 2024-07-02T02:30:14.129631Z INFO Daemon Daemon cloud-init is enabled: False Jul 2 02:30:14.134917 waagent[1553]: 2024-07-02T02:30:14.134857Z INFO Daemon Daemon Copying ovf-env.xml Jul 2 02:30:14.167807 waagent[1553]: 2024-07-02T02:30:14.165292Z INFO Daemon Daemon Successfully mounted dvd Jul 2 02:30:14.395150 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 2 02:30:14.449445 waagent[1553]: 2024-07-02T02:30:14.449270Z INFO Daemon Daemon Detect protocol endpoint Jul 2 02:30:14.454405 waagent[1553]: 2024-07-02T02:30:14.454337Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 2 02:30:14.460321 waagent[1553]: 2024-07-02T02:30:14.460254Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jul 2 02:30:14.466717 waagent[1553]: 2024-07-02T02:30:14.466656Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 2 02:30:14.472080 waagent[1553]: 2024-07-02T02:30:14.472020Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 2 02:30:14.477413 waagent[1553]: 2024-07-02T02:30:14.477353Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 2 02:30:14.609695 waagent[1553]: 2024-07-02T02:30:14.609625Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 2 02:30:14.618879 waagent[1553]: 2024-07-02T02:30:14.618833Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 2 02:30:14.624629 waagent[1553]: 2024-07-02T02:30:14.624555Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 2 02:30:15.211982 waagent[1553]: 2024-07-02T02:30:15.211832Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 2 02:30:15.242438 waagent[1553]: 2024-07-02T02:30:15.242359Z INFO Daemon Daemon Forcing an update of the goal state.. 
Jul 2 02:30:15.249200 waagent[1553]: 2024-07-02T02:30:15.249125Z INFO Daemon Daemon Fetching goal state [incarnation 1] Jul 2 02:30:15.328884 waagent[1553]: 2024-07-02T02:30:15.328755Z INFO Daemon Daemon Found private key matching thumbprint FB6825BE99E7A9A0ACC6264C5CB0E87293212079 Jul 2 02:30:15.337372 waagent[1553]: 2024-07-02T02:30:15.337265Z INFO Daemon Daemon Certificate with thumbprint 7DE50E3266A9B0773B311DDFE5CF79A2FDA68241 has no matching private key. Jul 2 02:30:15.347070 waagent[1553]: 2024-07-02T02:30:15.346986Z INFO Daemon Daemon Fetch goal state completed Jul 2 02:30:15.400582 waagent[1553]: 2024-07-02T02:30:15.400520Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: dfc52c75-3622-49d0-b4cc-288b174e6e36 New eTag: 14215447273269325004] Jul 2 02:30:15.411441 waagent[1553]: 2024-07-02T02:30:15.411355Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Jul 2 02:30:15.426618 waagent[1553]: 2024-07-02T02:30:15.426553Z INFO Daemon Daemon Starting provisioning Jul 2 02:30:15.431643 waagent[1553]: 2024-07-02T02:30:15.431570Z INFO Daemon Daemon Handle ovf-env.xml. Jul 2 02:30:15.436629 waagent[1553]: 2024-07-02T02:30:15.436563Z INFO Daemon Daemon Set hostname [ci-3510.3.5-a-c92d6bc2c6] Jul 2 02:30:15.481439 waagent[1553]: 2024-07-02T02:30:15.481271Z INFO Daemon Daemon Publish hostname [ci-3510.3.5-a-c92d6bc2c6] Jul 2 02:30:15.488094 waagent[1553]: 2024-07-02T02:30:15.488014Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 2 02:30:15.494441 waagent[1553]: 2024-07-02T02:30:15.494377Z INFO Daemon Daemon Primary interface is [eth0] Jul 2 02:30:15.511690 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 02:30:15.511842 systemd[1]: Stopped kubelet.service. Jul 2 02:30:15.513197 systemd[1]: Starting kubelet.service... Jul 2 02:30:15.514097 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. 
Jul 2 02:30:15.514260 systemd[1]: Stopped systemd-networkd-wait-online.service. Jul 2 02:30:15.514347 systemd[1]: Stopping systemd-networkd-wait-online.service... Jul 2 02:30:15.514586 systemd[1]: Stopping systemd-networkd.service... Jul 2 02:30:15.518362 systemd-networkd[1217]: eth0: DHCPv6 lease lost Jul 2 02:30:15.519733 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 02:30:15.519940 systemd[1]: Stopped systemd-networkd.service. Jul 2 02:30:15.522151 systemd[1]: Starting systemd-networkd.service... Jul 2 02:30:15.551992 systemd-networkd[1606]: enP8889s1: Link UP Jul 2 02:30:15.552005 systemd-networkd[1606]: enP8889s1: Gained carrier Jul 2 02:30:15.552928 systemd-networkd[1606]: eth0: Link UP Jul 2 02:30:15.552936 systemd-networkd[1606]: eth0: Gained carrier Jul 2 02:30:15.553245 systemd-networkd[1606]: lo: Link UP Jul 2 02:30:15.553253 systemd-networkd[1606]: lo: Gained carrier Jul 2 02:30:15.553649 systemd-networkd[1606]: eth0: Gained IPv6LL Jul 2 02:30:15.554057 systemd-networkd[1606]: Enumeration completed Jul 2 02:30:15.557172 waagent[1553]: 2024-07-02T02:30:15.555618Z INFO Daemon Daemon Create user account if not exists Jul 2 02:30:15.554171 systemd[1]: Started systemd-networkd.service. Jul 2 02:30:15.555945 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 2 02:30:15.563551 waagent[1553]: 2024-07-02T02:30:15.563445Z INFO Daemon Daemon User core already exists, skip useradd Jul 2 02:30:15.569852 systemd-networkd[1606]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 02:30:15.570567 waagent[1553]: 2024-07-02T02:30:15.570466Z INFO Daemon Daemon Configure sudoer Jul 2 02:30:15.575522 waagent[1553]: 2024-07-02T02:30:15.575411Z INFO Daemon Daemon Configure sshd Jul 2 02:30:15.581622 waagent[1553]: 2024-07-02T02:30:15.581538Z INFO Daemon Daemon Deploy ssh public key. 
Jul 2 02:30:15.594429 systemd-networkd[1606]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 2 02:30:15.597050 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 2 02:30:15.620276 systemd[1]: Started kubelet.service. Jul 2 02:30:15.670116 kubelet[1613]: E0702 02:30:15.670038 1613 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 02:30:15.673260 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 02:30:15.673416 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 02:30:16.056872 waagent[1553]: 2024-07-02T02:30:16.056729Z INFO Daemon Daemon Decode custom data Jul 2 02:30:16.061919 waagent[1553]: 2024-07-02T02:30:16.061837Z INFO Daemon Daemon Save custom data Jul 2 02:30:17.250947 waagent[1553]: 2024-07-02T02:30:17.250882Z INFO Daemon Daemon Provisioning complete Jul 2 02:30:17.269479 waagent[1553]: 2024-07-02T02:30:17.269410Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 2 02:30:17.275647 waagent[1553]: 2024-07-02T02:30:17.275579Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jul 2 02:30:17.285872 waagent[1553]: 2024-07-02T02:30:17.285804Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Jul 2 02:30:17.583739 waagent[1623]: 2024-07-02T02:30:17.583598Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Jul 2 02:30:17.584810 waagent[1623]: 2024-07-02T02:30:17.584756Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 2 02:30:17.585046 waagent[1623]: 2024-07-02T02:30:17.584998Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 2 02:30:17.597278 waagent[1623]: 2024-07-02T02:30:17.597210Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Jul 2 02:30:17.597584 waagent[1623]: 2024-07-02T02:30:17.597534Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Jul 2 02:30:17.668451 waagent[1623]: 2024-07-02T02:30:17.668323Z INFO ExtHandler ExtHandler Found private key matching thumbprint FB6825BE99E7A9A0ACC6264C5CB0E87293212079 Jul 2 02:30:17.668833 waagent[1623]: 2024-07-02T02:30:17.668782Z INFO ExtHandler ExtHandler Certificate with thumbprint 7DE50E3266A9B0773B311DDFE5CF79A2FDA68241 has no matching private key. 
Jul 2 02:30:17.669205 waagent[1623]: 2024-07-02T02:30:17.669095Z INFO ExtHandler ExtHandler Fetch goal state completed Jul 2 02:30:17.684760 waagent[1623]: 2024-07-02T02:30:17.684705Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 05ab866e-ea7e-468c-8ab7-b068a53bbe3b New eTag: 14215447273269325004] Jul 2 02:30:17.685461 waagent[1623]: 2024-07-02T02:30:17.685405Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Jul 2 02:30:17.769739 waagent[1623]: 2024-07-02T02:30:17.769599Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.5; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jul 2 02:30:17.790479 waagent[1623]: 2024-07-02T02:30:17.790392Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1623 Jul 2 02:30:17.794363 waagent[1623]: 2024-07-02T02:30:17.794283Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.5', '', 'Flatcar Container Linux by Kinvolk'] Jul 2 02:30:17.795851 waagent[1623]: 2024-07-02T02:30:17.795795Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 2 02:30:17.944999 waagent[1623]: 2024-07-02T02:30:17.944942Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 2 02:30:17.945633 waagent[1623]: 2024-07-02T02:30:17.945577Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 2 02:30:17.953446 waagent[1623]: 2024-07-02T02:30:17.953391Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Jul 2 02:30:17.954072 waagent[1623]: 2024-07-02T02:30:17.954015Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Jul 2 02:30:17.955356 waagent[1623]: 2024-07-02T02:30:17.955270Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Jul 2 02:30:17.956787 waagent[1623]: 2024-07-02T02:30:17.956718Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 2 02:30:17.957077 waagent[1623]: 2024-07-02T02:30:17.957006Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 2 02:30:17.957628 waagent[1623]: 2024-07-02T02:30:17.957554Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 2 02:30:17.958220 waagent[1623]: 2024-07-02T02:30:17.958154Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Jul 2 02:30:17.958967 waagent[1623]: 2024-07-02T02:30:17.958900Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 2 02:30:17.959079 waagent[1623]: 2024-07-02T02:30:17.959020Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 2 02:30:17.959079 waagent[1623]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 2 02:30:17.959079 waagent[1623]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jul 2 02:30:17.959079 waagent[1623]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 2 02:30:17.959079 waagent[1623]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 2 02:30:17.959079 waagent[1623]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 2 02:30:17.959079 waagent[1623]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 2 02:30:17.959389 waagent[1623]: 2024-07-02T02:30:17.959286Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jul 2 02:30:17.961507 waagent[1623]: 2024-07-02T02:30:17.961337Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 2 02:30:17.961914 waagent[1623]: 2024-07-02T02:30:17.961847Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 2 02:30:17.962989 waagent[1623]: 2024-07-02T02:30:17.962912Z INFO EnvHandler ExtHandler Configure routes Jul 2 02:30:17.963163 waagent[1623]: 2024-07-02T02:30:17.963109Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Jul 2 02:30:17.963775 waagent[1623]: 2024-07-02T02:30:17.963693Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 2 02:30:17.964237 waagent[1623]: 2024-07-02T02:30:17.964173Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 2 02:30:17.964413 waagent[1623]: 2024-07-02T02:30:17.964353Z INFO EnvHandler ExtHandler Gateway:None Jul 2 02:30:17.964588 waagent[1623]: 2024-07-02T02:30:17.964525Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jul 2 02:30:17.967427 waagent[1623]: 2024-07-02T02:30:17.967349Z INFO EnvHandler ExtHandler Routes:None Jul 2 02:30:17.978558 waagent[1623]: 2024-07-02T02:30:17.978474Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Jul 2 02:30:17.979285 waagent[1623]: 2024-07-02T02:30:17.979236Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Jul 2 02:30:17.980361 waagent[1623]: 2024-07-02T02:30:17.980289Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Jul 2 02:30:18.021942 waagent[1623]: 2024-07-02T02:30:18.021881Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
Jul 2 02:30:18.093877 waagent[1623]: 2024-07-02T02:30:18.093742Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1606' Jul 2 02:30:18.189108 waagent[1623]: 2024-07-02T02:30:18.188964Z INFO MonitorHandler ExtHandler Network interfaces: Jul 2 02:30:18.189108 waagent[1623]: Executing ['ip', '-a', '-o', 'link']: Jul 2 02:30:18.189108 waagent[1623]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 2 02:30:18.189108 waagent[1623]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:6e:d8:06 brd ff:ff:ff:ff:ff:ff Jul 2 02:30:18.189108 waagent[1623]: 3: enP8889s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:6e:d8:06 brd ff:ff:ff:ff:ff:ff\ altname enP8889p0s2 Jul 2 02:30:18.189108 waagent[1623]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 2 02:30:18.189108 waagent[1623]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 2 02:30:18.189108 waagent[1623]: 2: eth0 inet 10.200.20.11/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 2 02:30:18.189108 waagent[1623]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 2 02:30:18.189108 waagent[1623]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Jul 2 02:30:18.189108 waagent[1623]: 2: eth0 inet6 fe80::20d:3aff:fe6e:d806/64 scope link \ valid_lft forever preferred_lft forever Jul 2 02:30:18.346728 waagent[1623]: 2024-07-02T02:30:18.346629Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.11.1.4 -- exiting Jul 2 02:30:19.290536 waagent[1553]: 2024-07-02T02:30:19.290410Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully 
running Jul 2 02:30:19.295013 waagent[1553]: 2024-07-02T02:30:19.294963Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.11.1.4 to be the latest agent Jul 2 02:30:20.468920 waagent[1654]: 2024-07-02T02:30:20.468827Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.11.1.4) Jul 2 02:30:20.469940 waagent[1654]: 2024-07-02T02:30:20.469884Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.5 Jul 2 02:30:20.470171 waagent[1654]: 2024-07-02T02:30:20.470123Z INFO ExtHandler ExtHandler Python: 3.9.16 Jul 2 02:30:20.470409 waagent[1654]: 2024-07-02T02:30:20.470360Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Jul 2 02:30:20.478200 waagent[1654]: 2024-07-02T02:30:20.478102Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.5; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jul 2 02:30:20.478728 waagent[1654]: 2024-07-02T02:30:20.478672Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 2 02:30:20.478972 waagent[1654]: 2024-07-02T02:30:20.478924Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 2 02:30:20.492293 waagent[1654]: 2024-07-02T02:30:20.492227Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 2 02:30:20.508258 waagent[1654]: 2024-07-02T02:30:20.508205Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.151 Jul 2 02:30:20.509381 waagent[1654]: 2024-07-02T02:30:20.509306Z INFO ExtHandler Jul 2 02:30:20.509622 waagent[1654]: 2024-07-02T02:30:20.509572Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 64d5a9d2-6596-4117-8b83-f0dff910f05e eTag: 14215447273269325004 source: Fabric] Jul 2 02:30:20.510447 waagent[1654]: 2024-07-02T02:30:20.510389Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jul 2 02:30:20.511766 waagent[1654]: 2024-07-02T02:30:20.511707Z INFO ExtHandler Jul 2 02:30:20.511993 waagent[1654]: 2024-07-02T02:30:20.511945Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 2 02:30:20.518723 waagent[1654]: 2024-07-02T02:30:20.518678Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 2 02:30:20.519246 waagent[1654]: 2024-07-02T02:30:20.519201Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Jul 2 02:30:20.537369 waagent[1654]: 2024-07-02T02:30:20.537289Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Jul 2 02:30:20.608008 waagent[1654]: 2024-07-02T02:30:20.607874Z INFO ExtHandler Downloaded certificate {'thumbprint': 'FB6825BE99E7A9A0ACC6264C5CB0E87293212079', 'hasPrivateKey': True} Jul 2 02:30:20.609263 waagent[1654]: 2024-07-02T02:30:20.609204Z INFO ExtHandler Downloaded certificate {'thumbprint': '7DE50E3266A9B0773B311DDFE5CF79A2FDA68241', 'hasPrivateKey': False} Jul 2 02:30:20.610459 waagent[1654]: 2024-07-02T02:30:20.610398Z INFO ExtHandler Fetch goal state completed Jul 2 02:30:20.631184 waagent[1654]: 2024-07-02T02:30:20.631075Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.7 1 Nov 2022 (Library: OpenSSL 3.0.7 1 Nov 2022) Jul 2 02:30:20.643433 waagent[1654]: 2024-07-02T02:30:20.643350Z INFO ExtHandler ExtHandler WALinuxAgent-2.11.1.4 running as process 1654 Jul 2 02:30:20.647154 waagent[1654]: 2024-07-02T02:30:20.647095Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.5', '', 'Flatcar Container Linux by Kinvolk'] Jul 2 02:30:20.648802 waagent[1654]: 2024-07-02T02:30:20.648744Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 2 02:30:20.653439 waagent[1654]: 2024-07-02T02:30:20.653389Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 2 02:30:20.653918 waagent[1654]: 
2024-07-02T02:30:20.653863Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 2 02:30:20.661902 waagent[1654]: 2024-07-02T02:30:20.661855Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jul 2 02:30:20.662534 waagent[1654]: 2024-07-02T02:30:20.662480Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Jul 2 02:30:20.668398 waagent[1654]: 2024-07-02T02:30:20.668287Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jul 2 02:30:20.669544 waagent[1654]: 2024-07-02T02:30:20.669483Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jul 2 02:30:20.671168 waagent[1654]: 2024-07-02T02:30:20.671097Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 2 02:30:20.671430 waagent[1654]: 2024-07-02T02:30:20.671356Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 2 02:30:20.672004 waagent[1654]: 2024-07-02T02:30:20.671930Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 2 02:30:20.672669 waagent[1654]: 2024-07-02T02:30:20.672596Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Jul 2 02:30:20.672989 waagent[1654]: 2024-07-02T02:30:20.672927Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 2 02:30:20.672989 waagent[1654]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 2 02:30:20.672989 waagent[1654]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jul 2 02:30:20.672989 waagent[1654]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 2 02:30:20.672989 waagent[1654]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 2 02:30:20.672989 waagent[1654]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 2 02:30:20.672989 waagent[1654]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 2 02:30:20.675074 waagent[1654]: 2024-07-02T02:30:20.674951Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jul 2 02:30:20.675670 waagent[1654]: 2024-07-02T02:30:20.675598Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 2 02:30:20.675873 waagent[1654]: 2024-07-02T02:30:20.675815Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 2 02:30:20.676581 waagent[1654]: 2024-07-02T02:30:20.676505Z INFO EnvHandler ExtHandler Configure routes Jul 2 02:30:20.676749 waagent[1654]: 2024-07-02T02:30:20.676699Z INFO EnvHandler ExtHandler Gateway:None Jul 2 02:30:20.676863 waagent[1654]: 2024-07-02T02:30:20.676822Z INFO EnvHandler ExtHandler Routes:None Jul 2 02:30:20.680854 waagent[1654]: 2024-07-02T02:30:20.680680Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 2 02:30:20.681127 waagent[1654]: 2024-07-02T02:30:20.681040Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jul 2 02:30:20.682319 waagent[1654]: 2024-07-02T02:30:20.682213Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 2 02:30:20.682546 waagent[1654]: 2024-07-02T02:30:20.682468Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. 
This indicates how often the agent checks for new goal states and reports status. Jul 2 02:30:20.682865 waagent[1654]: 2024-07-02T02:30:20.682790Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 2 02:30:20.696564 waagent[1654]: 2024-07-02T02:30:20.696487Z INFO ExtHandler ExtHandler Downloading agent manifest Jul 2 02:30:20.709616 waagent[1654]: 2024-07-02T02:30:20.709528Z INFO MonitorHandler ExtHandler Network interfaces: Jul 2 02:30:20.709616 waagent[1654]: Executing ['ip', '-a', '-o', 'link']: Jul 2 02:30:20.709616 waagent[1654]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 2 02:30:20.709616 waagent[1654]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:6e:d8:06 brd ff:ff:ff:ff:ff:ff Jul 2 02:30:20.709616 waagent[1654]: 3: enP8889s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:6e:d8:06 brd ff:ff:ff:ff:ff:ff\ altname enP8889p0s2 Jul 2 02:30:20.709616 waagent[1654]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 2 02:30:20.709616 waagent[1654]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 2 02:30:20.709616 waagent[1654]: 2: eth0 inet 10.200.20.11/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 2 02:30:20.709616 waagent[1654]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 2 02:30:20.709616 waagent[1654]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Jul 2 02:30:20.709616 waagent[1654]: 2: eth0 inet6 fe80::20d:3aff:fe6e:d806/64 scope link \ valid_lft forever preferred_lft forever Jul 2 02:30:20.715845 waagent[1654]: 2024-07-02T02:30:20.715762Z INFO ExtHandler ExtHandler Jul 2 02:30:20.720926 waagent[1654]: 2024-07-02T02:30:20.720725Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started 
[incarnation_1 channel: WireServer source: Fabric activity: 6e356af4-f685-465a-be92-4ab71c52dc7b correlation 31654a67-48ea-42d3-a828-c1b3249fd836 created: 2024-07-02T02:28:30.409653Z] Jul 2 02:30:20.727139 waagent[1654]: 2024-07-02T02:30:20.727058Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jul 2 02:30:20.732267 waagent[1654]: 2024-07-02T02:30:20.732196Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 16 ms] Jul 2 02:30:20.752471 waagent[1654]: 2024-07-02T02:30:20.752403Z INFO ExtHandler ExtHandler Looking for existing remote access users. Jul 2 02:30:20.829821 waagent[1654]: 2024-07-02T02:30:20.829745Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.11.1.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: CEDD22C3-7FBB-46B2-B753-329D3EE942FE;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Jul 2 02:30:21.118501 waagent[1654]: 2024-07-02T02:30:21.118305Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jul 2 02:30:21.118501 waagent[1654]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 2 02:30:21.118501 waagent[1654]: pkts bytes target prot opt in out source destination Jul 2 02:30:21.118501 waagent[1654]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 2 02:30:21.118501 waagent[1654]: pkts bytes target prot opt in out source destination Jul 2 02:30:21.118501 waagent[1654]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 2 02:30:21.118501 waagent[1654]: pkts bytes target prot opt in out source destination Jul 2 02:30:21.118501 waagent[1654]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 2 02:30:21.118501 waagent[1654]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 2 02:30:21.118501 waagent[1654]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 2 02:30:21.125930 waagent[1654]: 2024-07-02T02:30:21.125830Z INFO EnvHandler ExtHandler Current Firewall rules: Jul 2 
02:30:21.125930 waagent[1654]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 2 02:30:21.125930 waagent[1654]: pkts bytes target prot opt in out source destination Jul 2 02:30:21.125930 waagent[1654]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 2 02:30:21.125930 waagent[1654]: pkts bytes target prot opt in out source destination Jul 2 02:30:21.125930 waagent[1654]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 2 02:30:21.125930 waagent[1654]: pkts bytes target prot opt in out source destination Jul 2 02:30:21.125930 waagent[1654]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 2 02:30:21.125930 waagent[1654]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 2 02:30:21.125930 waagent[1654]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 2 02:30:21.126710 waagent[1654]: 2024-07-02T02:30:21.126662Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jul 2 02:30:25.878119 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 02:30:25.878304 systemd[1]: Stopped kubelet.service. Jul 2 02:30:25.879662 systemd[1]: Starting kubelet.service... Jul 2 02:30:26.130078 systemd[1]: Started kubelet.service. Jul 2 02:30:26.166268 kubelet[1709]: E0702 02:30:26.166203 1709 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 02:30:26.168440 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 02:30:26.168570 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 02:30:36.378209 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 2 02:30:36.378408 systemd[1]: Stopped kubelet.service. 
Jul 2 02:30:36.379789 systemd[1]: Starting kubelet.service... Jul 2 02:30:36.615740 systemd[1]: Started kubelet.service. Jul 2 02:30:36.657784 kubelet[1719]: E0702 02:30:36.657679 1719 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 02:30:36.660162 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 02:30:36.660281 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 02:30:39.159468 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jul 2 02:30:46.878237 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 2 02:30:46.878435 systemd[1]: Stopped kubelet.service. Jul 2 02:30:46.879786 systemd[1]: Starting kubelet.service... Jul 2 02:30:47.131624 systemd[1]: Started kubelet.service. Jul 2 02:30:47.170836 kubelet[1729]: E0702 02:30:47.170775 1729 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 02:30:47.173026 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 02:30:47.173146 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 02:30:50.847271 update_engine[1437]: I0702 02:30:50.847215 1437 update_attempter.cc:509] Updating boot flags... Jul 2 02:30:57.378124 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jul 2 02:30:57.378291 systemd[1]: Stopped kubelet.service. Jul 2 02:30:57.379668 systemd[1]: Starting kubelet.service... 
Jul 2 02:30:57.502090 systemd[1]: Started kubelet.service. Jul 2 02:30:57.543380 kubelet[1778]: E0702 02:30:57.543336 1778 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 02:30:57.546156 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 02:30:57.546274 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 02:31:07.186702 systemd[1]: Created slice system-sshd.slice. Jul 2 02:31:07.187793 systemd[1]: Started sshd@0-10.200.20.11:22-10.200.16.10:54624.service. Jul 2 02:31:07.628136 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jul 2 02:31:07.628300 systemd[1]: Stopped kubelet.service. Jul 2 02:31:07.629705 systemd[1]: Starting kubelet.service... Jul 2 02:31:07.866705 systemd[1]: Started kubelet.service. Jul 2 02:31:07.904862 sshd[1785]: Accepted publickey for core from 10.200.16.10 port 54624 ssh2: RSA SHA256:I/YdyjcuZ5Fsu3YYuMkl0R6I+4bywGkiBXqUH7a1KBg Jul 2 02:31:07.910407 kubelet[1791]: E0702 02:31:07.910359 1791 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 02:31:07.912653 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 02:31:07.912772 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 02:31:08.096151 sshd[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 02:31:08.100196 systemd-logind[1435]: New session 3 of user core. 
Jul 2 02:31:08.100654 systemd[1]: Started session-3.scope. Jul 2 02:31:08.422475 systemd[1]: Started sshd@1-10.200.20.11:22-10.200.16.10:59886.service. Jul 2 02:31:08.835221 sshd[1800]: Accepted publickey for core from 10.200.16.10 port 59886 ssh2: RSA SHA256:I/YdyjcuZ5Fsu3YYuMkl0R6I+4bywGkiBXqUH7a1KBg Jul 2 02:31:08.836848 sshd[1800]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 02:31:08.841355 systemd-logind[1435]: New session 4 of user core. Jul 2 02:31:08.841638 systemd[1]: Started session-4.scope. Jul 2 02:31:09.147807 sshd[1800]: pam_unix(sshd:session): session closed for user core Jul 2 02:31:09.150467 systemd[1]: sshd@1-10.200.20.11:22-10.200.16.10:59886.service: Deactivated successfully. Jul 2 02:31:09.151162 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 02:31:09.151675 systemd-logind[1435]: Session 4 logged out. Waiting for processes to exit. Jul 2 02:31:09.152604 systemd-logind[1435]: Removed session 4. Jul 2 02:31:09.215729 systemd[1]: Started sshd@2-10.200.20.11:22-10.200.16.10:59902.service. Jul 2 02:31:09.628415 sshd[1806]: Accepted publickey for core from 10.200.16.10 port 59902 ssh2: RSA SHA256:I/YdyjcuZ5Fsu3YYuMkl0R6I+4bywGkiBXqUH7a1KBg Jul 2 02:31:09.629848 sshd[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 02:31:09.633583 systemd-logind[1435]: New session 5 of user core. Jul 2 02:31:09.634001 systemd[1]: Started session-5.scope. Jul 2 02:31:09.925118 sshd[1806]: pam_unix(sshd:session): session closed for user core Jul 2 02:31:09.927768 systemd[1]: sshd@2-10.200.20.11:22-10.200.16.10:59902.service: Deactivated successfully. Jul 2 02:31:09.928430 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 02:31:09.928943 systemd-logind[1435]: Session 5 logged out. Waiting for processes to exit. Jul 2 02:31:09.929764 systemd-logind[1435]: Removed session 5. Jul 2 02:31:09.998572 systemd[1]: Started sshd@3-10.200.20.11:22-10.200.16.10:59910.service. 
Jul 2 02:31:10.443715 sshd[1812]: Accepted publickey for core from 10.200.16.10 port 59910 ssh2: RSA SHA256:I/YdyjcuZ5Fsu3YYuMkl0R6I+4bywGkiBXqUH7a1KBg Jul 2 02:31:10.444989 sshd[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 02:31:10.448772 systemd-logind[1435]: New session 6 of user core. Jul 2 02:31:10.449188 systemd[1]: Started session-6.scope. Jul 2 02:31:10.776881 sshd[1812]: pam_unix(sshd:session): session closed for user core Jul 2 02:31:10.779683 systemd[1]: sshd@3-10.200.20.11:22-10.200.16.10:59910.service: Deactivated successfully. Jul 2 02:31:10.780375 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 02:31:10.780894 systemd-logind[1435]: Session 6 logged out. Waiting for processes to exit. Jul 2 02:31:10.781693 systemd-logind[1435]: Removed session 6. Jul 2 02:31:10.850070 systemd[1]: Started sshd@4-10.200.20.11:22-10.200.16.10:59924.service. Jul 2 02:31:11.294604 sshd[1818]: Accepted publickey for core from 10.200.16.10 port 59924 ssh2: RSA SHA256:I/YdyjcuZ5Fsu3YYuMkl0R6I+4bywGkiBXqUH7a1KBg Jul 2 02:31:11.295879 sshd[1818]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 02:31:11.299627 systemd-logind[1435]: New session 7 of user core. Jul 2 02:31:11.300038 systemd[1]: Started session-7.scope. Jul 2 02:31:12.086248 sudo[1821]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 02:31:12.086815 sudo[1821]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 02:31:12.106230 systemd[1]: Starting docker.service... 
Jul 2 02:31:12.136080 env[1831]: time="2024-07-02T02:31:12.136028320Z" level=info msg="Starting up" Jul 2 02:31:12.137146 env[1831]: time="2024-07-02T02:31:12.137123308Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 02:31:12.137255 env[1831]: time="2024-07-02T02:31:12.137233387Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 02:31:12.137415 env[1831]: time="2024-07-02T02:31:12.137397665Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 2 02:31:12.137530 env[1831]: time="2024-07-02T02:31:12.137515664Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 2 02:31:12.138995 env[1831]: time="2024-07-02T02:31:12.138972809Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 02:31:12.139153 env[1831]: time="2024-07-02T02:31:12.139096208Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 02:31:12.139229 env[1831]: time="2024-07-02T02:31:12.139214486Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 2 02:31:12.139281 env[1831]: time="2024-07-02T02:31:12.139269966Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 2 02:31:12.145702 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2429722311-merged.mount: Deactivated successfully. Jul 2 02:31:12.240816 env[1831]: time="2024-07-02T02:31:12.240774950Z" level=info msg="Loading containers: start." Jul 2 02:31:12.432336 kernel: Initializing XFRM netlink socket Jul 2 02:31:12.467194 env[1831]: time="2024-07-02T02:31:12.467163436Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Jul 2 02:31:12.690628 systemd-networkd[1606]: docker0: Link UP Jul 2 02:31:12.706455 env[1831]: time="2024-07-02T02:31:12.706419067Z" level=info msg="Loading containers: done." Jul 2 02:31:12.716723 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2640875695-merged.mount: Deactivated successfully. Jul 2 02:31:12.727687 env[1831]: time="2024-07-02T02:31:12.727643487Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 02:31:12.727860 env[1831]: time="2024-07-02T02:31:12.727836725Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 2 02:31:12.727957 env[1831]: time="2024-07-02T02:31:12.727936524Z" level=info msg="Daemon has completed initialization" Jul 2 02:31:12.909437 systemd[1]: Started docker.service. Jul 2 02:31:12.915235 env[1831]: time="2024-07-02T02:31:12.915163017Z" level=info msg="API listen on /run/docker.sock" Jul 2 02:31:17.598924 env[1446]: time="2024-07-02T02:31:17.598884480Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\"" Jul 2 02:31:18.128216 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jul 2 02:31:18.128415 systemd[1]: Stopped kubelet.service. Jul 2 02:31:18.129750 systemd[1]: Starting kubelet.service... Jul 2 02:31:18.206163 systemd[1]: Started kubelet.service. 
Jul 2 02:31:18.243908 kubelet[1956]: E0702 02:31:18.243836 1956 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 02:31:18.246246 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 02:31:18.246384 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 02:31:18.796364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount441213161.mount: Deactivated successfully.
Jul 2 02:31:21.757442 env[1446]: time="2024-07-02T02:31:21.757394893Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 02:31:21.761758 env[1446]: time="2024-07-02T02:31:21.761733617Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:46bfddf397d499c68edd3a505a02ab6b7a77acc6cbab684122699693c44fdc8a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 02:31:21.764595 env[1446]: time="2024-07-02T02:31:21.764444595Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 02:31:21.769675 env[1446]: time="2024-07-02T02:31:21.769641032Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 02:31:21.770496 env[1446]: time="2024-07-02T02:31:21.770470385Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\" returns image reference \"sha256:46bfddf397d499c68edd3a505a02ab6b7a77acc6cbab684122699693c44fdc8a\""
Jul 2 02:31:21.779530 env[1446]: time="2024-07-02T02:31:21.779501351Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\""
Jul 2 02:31:24.764411 env[1446]: time="2024-07-02T02:31:24.764353143Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 02:31:24.769605 env[1446]: time="2024-07-02T02:31:24.769567624Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9df0eeeacdd8f3cd9f3c3a08fbdfd665da4283115b53bf8b5d434382c02230a8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 02:31:24.773163 env[1446]: time="2024-07-02T02:31:24.773126716Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 02:31:24.778095 env[1446]: time="2024-07-02T02:31:24.778067479Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 02:31:24.778794 env[1446]: time="2024-07-02T02:31:24.778766913Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\" returns image reference \"sha256:9df0eeeacdd8f3cd9f3c3a08fbdfd665da4283115b53bf8b5d434382c02230a8\""
Jul 2 02:31:24.788060 env[1446]: time="2024-07-02T02:31:24.788033683Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\""
Jul 2 02:31:26.913487 env[1446]: time="2024-07-02T02:31:26.913428669Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 02:31:26.917547 env[1446]: time="2024-07-02T02:31:26.917520319Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4d823a436d04c2aac5c8e0dd5a83efa81f1917a3c017feabc4917150cb90fa29,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 02:31:26.921128 env[1446]: time="2024-07-02T02:31:26.921095973Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 02:31:26.924161 env[1446]: time="2024-07-02T02:31:26.924129871Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 02:31:26.925084 env[1446]: time="2024-07-02T02:31:26.925059064Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\" returns image reference \"sha256:4d823a436d04c2aac5c8e0dd5a83efa81f1917a3c017feabc4917150cb90fa29\""
Jul 2 02:31:26.934332 env[1446]: time="2024-07-02T02:31:26.934278877Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\""
Jul 2 02:31:28.378207 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Jul 2 02:31:28.378397 systemd[1]: Stopped kubelet.service.
Jul 2 02:31:28.379769 systemd[1]: Starting kubelet.service...
Jul 2 02:31:28.695271 systemd[1]: Started kubelet.service.
Jul 2 02:31:28.741933 kubelet[1983]: E0702 02:31:28.741888 1983 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 02:31:28.744280 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 02:31:28.744430 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 02:31:29.266077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount37217418.mount: Deactivated successfully.
Jul 2 02:31:29.709422 env[1446]: time="2024-07-02T02:31:29.709378529Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 02:31:29.713271 env[1446]: time="2024-07-02T02:31:29.713233943Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a75156450625cf630b7b9b1e8b7d881969131638181257d0d67db0876a25b32f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 02:31:29.716820 env[1446]: time="2024-07-02T02:31:29.716797839Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 02:31:29.719049 env[1446]: time="2024-07-02T02:31:29.719027024Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 02:31:29.719527 env[1446]: time="2024-07-02T02:31:29.719502980Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\" returns image reference \"sha256:a75156450625cf630b7b9b1e8b7d881969131638181257d0d67db0876a25b32f\""
Jul 2 02:31:29.728773 env[1446]: time="2024-07-02T02:31:29.728719238Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jul 2 02:31:30.396370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount19203492.mount: Deactivated successfully.
Jul 2 02:31:32.262744 env[1446]: time="2024-07-02T02:31:32.262698604Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 02:31:32.268522 env[1446]: time="2024-07-02T02:31:32.268497528Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 02:31:32.271470 env[1446]: time="2024-07-02T02:31:32.271436390Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 02:31:32.275502 env[1446]: time="2024-07-02T02:31:32.275476764Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 02:31:32.276159 env[1446]: time="2024-07-02T02:31:32.276129600Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Jul 2 02:31:32.285438 env[1446]: time="2024-07-02T02:31:32.285411182Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jul 2 02:31:32.847351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4019835724.mount: Deactivated successfully.
Jul 2 02:31:32.867754 env[1446]: time="2024-07-02T02:31:32.867693160Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 02:31:32.874371 env[1446]: time="2024-07-02T02:31:32.874338518Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 02:31:32.878256 env[1446]: time="2024-07-02T02:31:32.878228294Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 02:31:32.881853 env[1446]: time="2024-07-02T02:31:32.881816511Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 02:31:32.882251 env[1446]: time="2024-07-02T02:31:32.882222869Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Jul 2 02:31:32.891682 env[1446]: time="2024-07-02T02:31:32.891652250Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Jul 2 02:31:33.606588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3627743099.mount: Deactivated successfully.
Jul 2 02:31:38.878122 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Jul 2 02:31:38.878283 systemd[1]: Stopped kubelet.service.
Jul 2 02:31:38.879669 systemd[1]: Starting kubelet.service...
Jul 2 02:31:38.955902 systemd[1]: Started kubelet.service.
Jul 2 02:31:38.995142 kubelet[2009]: E0702 02:31:38.995081 2009 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 02:31:38.997451 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 02:31:38.997571 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 02:31:39.481015 env[1446]: time="2024-07-02T02:31:39.480970542Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 02:31:39.487921 env[1446]: time="2024-07-02T02:31:39.487888825Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 02:31:39.493050 env[1446]: time="2024-07-02T02:31:39.493024597Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 02:31:39.496406 env[1446]: time="2024-07-02T02:31:39.496366459Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 02:31:39.497227 env[1446]: time="2024-07-02T02:31:39.497196615Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Jul 2 02:31:46.267866 systemd[1]: Stopped kubelet.service.
Jul 2 02:31:46.270351 systemd[1]: Starting kubelet.service...
Jul 2 02:31:46.295656 systemd[1]: Reloading.
Jul 2 02:31:46.363823 /usr/lib/systemd/system-generators/torcx-generator[2103]: time="2024-07-02T02:31:46Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]"
Jul 2 02:31:46.363854 /usr/lib/systemd/system-generators/torcx-generator[2103]: time="2024-07-02T02:31:46Z" level=info msg="torcx already run"
Jul 2 02:31:46.445120 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 2 02:31:46.445140 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 2 02:31:46.460338 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 02:31:46.548809 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 2 02:31:46.548878 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 2 02:31:46.549074 systemd[1]: Stopped kubelet.service.
Jul 2 02:31:46.550732 systemd[1]: Starting kubelet.service...
Jul 2 02:31:46.724102 systemd[1]: Started kubelet.service.
Jul 2 02:31:46.768674 kubelet[2167]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 02:31:46.768674 kubelet[2167]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 02:31:46.768674 kubelet[2167]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 02:31:46.768674 kubelet[2167]: I0702 02:31:46.767973 2167 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 02:31:47.173053 kubelet[2167]: I0702 02:31:47.173020 2167 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jul 2 02:31:47.173053 kubelet[2167]: I0702 02:31:47.173048 2167 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 02:31:47.173252 kubelet[2167]: I0702 02:31:47.173233 2167 server.go:919] "Client rotation is on, will bootstrap in background"
Jul 2 02:31:47.200343 kubelet[2167]: I0702 02:31:47.200282 2167 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 02:31:47.200536 kubelet[2167]: E0702 02:31:47.200521 2167 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.11:6443: connect: connection refused
Jul 2 02:31:47.208060 kubelet[2167]: I0702 02:31:47.208040 2167 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 02:31:47.209771 kubelet[2167]: I0702 02:31:47.209753 2167 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 02:31:47.210039 kubelet[2167]: I0702 02:31:47.210024 2167 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 02:31:47.210177 kubelet[2167]: I0702 02:31:47.210166 2167 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 02:31:47.210238 kubelet[2167]: I0702 02:31:47.210230 2167 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 02:31:47.210414 kubelet[2167]: I0702 02:31:47.210403 2167 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 02:31:47.213005 kubelet[2167]: I0702 02:31:47.212989 2167 kubelet.go:396] "Attempting to sync node with API server"
Jul 2 02:31:47.213098 kubelet[2167]: I0702 02:31:47.213088 2167 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 02:31:47.213164 kubelet[2167]: I0702 02:31:47.213155 2167 kubelet.go:312] "Adding apiserver pod source"
Jul 2 02:31:47.213226 kubelet[2167]: I0702 02:31:47.213218 2167 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 02:31:47.215766 kubelet[2167]: I0702 02:31:47.215742 2167 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Jul 2 02:31:47.215996 kubelet[2167]: I0702 02:31:47.215975 2167 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 2 02:31:47.217167 kubelet[2167]: W0702 02:31:47.217145 2167 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 2 02:31:47.217647 kubelet[2167]: I0702 02:31:47.217626 2167 server.go:1256] "Started kubelet"
Jul 2 02:31:47.217768 kubelet[2167]: W0702 02:31:47.217733 2167 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.20.11:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused
Jul 2 02:31:47.217796 kubelet[2167]: E0702 02:31:47.217776 2167 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.11:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused
Jul 2 02:31:47.217852 kubelet[2167]: W0702 02:31:47.217828 2167 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.5-a-c92d6bc2c6&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused
Jul 2 02:31:47.217878 kubelet[2167]: E0702 02:31:47.217857 2167 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.5-a-c92d6bc2c6&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused
Jul 2 02:31:47.230379 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Jul 2 02:31:47.232778 kubelet[2167]: I0702 02:31:47.232752 2167 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 02:31:47.235321 kubelet[2167]: I0702 02:31:47.235290 2167 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 2 02:31:47.235662 kubelet[2167]: I0702 02:31:47.235628 2167 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 02:31:47.236471 kubelet[2167]: I0702 02:31:47.236455 2167 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 02:31:47.236924 kubelet[2167]: I0702 02:31:47.236901 2167 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 02:31:47.237731 kubelet[2167]: I0702 02:31:47.237712 2167 server.go:461] "Adding debug handlers to kubelet server"
Jul 2 02:31:47.238770 kubelet[2167]: I0702 02:31:47.238754 2167 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jul 2 02:31:47.238916 kubelet[2167]: I0702 02:31:47.238905 2167 reconciler_new.go:29] "Reconciler: start to sync state"
Jul 2 02:31:47.241306 kubelet[2167]: W0702 02:31:47.241269 2167 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused
Jul 2 02:31:47.241449 kubelet[2167]: E0702 02:31:47.241435 2167 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused
Jul 2 02:31:47.241632 kubelet[2167]: E0702 02:31:47.241616 2167 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-a-c92d6bc2c6?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="200ms"
Jul 2 02:31:47.242760 kubelet[2167]: E0702 02:31:47.242738 2167 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.11:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.11:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.5-a-c92d6bc2c6.17de44935156674a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.5-a-c92d6bc2c6,UID:ci-3510.3.5-a-c92d6bc2c6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.5-a-c92d6bc2c6,},FirstTimestamp:2024-07-02 02:31:47.217606474 +0000 UTC m=+0.489694716,LastTimestamp:2024-07-02 02:31:47.217606474 +0000 UTC m=+0.489694716,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.5-a-c92d6bc2c6,}"
Jul 2 02:31:47.243567 kubelet[2167]: I0702 02:31:47.243548 2167 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 2 02:31:47.245132 kubelet[2167]: I0702 02:31:47.245118 2167 factory.go:221] Registration of the containerd container factory successfully
Jul 2 02:31:47.245219 kubelet[2167]: I0702 02:31:47.245210 2167 factory.go:221] Registration of the systemd container factory successfully
Jul 2 02:31:47.261776 kubelet[2167]: E0702 02:31:47.261752 2167 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 02:31:47.285037 kubelet[2167]: I0702 02:31:47.285007 2167 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 02:31:47.285037 kubelet[2167]: I0702 02:31:47.285025 2167 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 02:31:47.285037 kubelet[2167]: I0702 02:31:47.285043 2167 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 02:31:47.289288 kubelet[2167]: I0702 02:31:47.289263 2167 policy_none.go:49] "None policy: Start"
Jul 2 02:31:47.290076 kubelet[2167]: I0702 02:31:47.290061 2167 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 2 02:31:47.290291 kubelet[2167]: I0702 02:31:47.290279 2167 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 02:31:47.297914 systemd[1]: Created slice kubepods.slice.
Jul 2 02:31:47.302978 systemd[1]: Created slice kubepods-burstable.slice.
Jul 2 02:31:47.305620 systemd[1]: Created slice kubepods-besteffort.slice.
Jul 2 02:31:47.314048 kubelet[2167]: I0702 02:31:47.314024 2167 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 2 02:31:47.314266 kubelet[2167]: I0702 02:31:47.314240 2167 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 2 02:31:47.316675 kubelet[2167]: E0702 02:31:47.316659 2167 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.5-a-c92d6bc2c6\" not found"
Jul 2 02:31:47.325304 kubelet[2167]: I0702 02:31:47.325266 2167 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 02:31:47.326554 kubelet[2167]: I0702 02:31:47.326528 2167 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 02:31:47.326554 kubelet[2167]: I0702 02:31:47.326552 2167 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 02:31:47.326653 kubelet[2167]: I0702 02:31:47.326575 2167 kubelet.go:2329] "Starting kubelet main sync loop"
Jul 2 02:31:47.326653 kubelet[2167]: E0702 02:31:47.326615 2167 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Jul 2 02:31:47.327416 kubelet[2167]: W0702 02:31:47.327367 2167 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused
Jul 2 02:31:47.327416 kubelet[2167]: E0702 02:31:47.327420 2167 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused
Jul 2 02:31:47.338882 kubelet[2167]: I0702 02:31:47.338865 2167 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.5-a-c92d6bc2c6"
Jul 2 02:31:47.339372 kubelet[2167]: E0702 02:31:47.339352 2167 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-3510.3.5-a-c92d6bc2c6"
Jul 2 02:31:47.427697 kubelet[2167]: I0702 02:31:47.427619 2167 topology_manager.go:215] "Topology Admit Handler" podUID="9e4728c1d3c3744f633a965b67c79a05" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.5-a-c92d6bc2c6"
Jul 2 02:31:47.429434 kubelet[2167]: I0702 02:31:47.429414 2167 topology_manager.go:215] "Topology Admit Handler" podUID="663c486df4f058777cbd27507371fb41" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.5-a-c92d6bc2c6"
Jul 2 02:31:47.431204 kubelet[2167]: I0702 02:31:47.431187 2167 topology_manager.go:215] "Topology Admit Handler" podUID="e4ff5f0aded40d950ab46ad5c999f8be" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.5-a-c92d6bc2c6"
Jul 2 02:31:47.435728 systemd[1]: Created slice kubepods-burstable-pod9e4728c1d3c3744f633a965b67c79a05.slice.
Jul 2 02:31:47.440119 kubelet[2167]: I0702 02:31:47.440091 2167 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9e4728c1d3c3744f633a965b67c79a05-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.5-a-c92d6bc2c6\" (UID: \"9e4728c1d3c3744f633a965b67c79a05\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-c92d6bc2c6"
Jul 2 02:31:47.440269 kubelet[2167]: I0702 02:31:47.440258 2167 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/663c486df4f058777cbd27507371fb41-ca-certs\") pod \"kube-controller-manager-ci-3510.3.5-a-c92d6bc2c6\" (UID: \"663c486df4f058777cbd27507371fb41\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-c92d6bc2c6"
Jul 2 02:31:47.440418 kubelet[2167]: I0702 02:31:47.440408 2167 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/663c486df4f058777cbd27507371fb41-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.5-a-c92d6bc2c6\" (UID: \"663c486df4f058777cbd27507371fb41\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-c92d6bc2c6"
Jul 2 02:31:47.440538 kubelet[2167]: I0702 02:31:47.440528 2167 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/663c486df4f058777cbd27507371fb41-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.5-a-c92d6bc2c6\" (UID: \"663c486df4f058777cbd27507371fb41\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-c92d6bc2c6"
Jul 2 02:31:47.440651 kubelet[2167]: I0702 02:31:47.440641 2167 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9e4728c1d3c3744f633a965b67c79a05-ca-certs\") pod \"kube-apiserver-ci-3510.3.5-a-c92d6bc2c6\" (UID: \"9e4728c1d3c3744f633a965b67c79a05\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-c92d6bc2c6"
Jul 2 02:31:47.440771 kubelet[2167]: I0702 02:31:47.440761 2167 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9e4728c1d3c3744f633a965b67c79a05-k8s-certs\") pod \"kube-apiserver-ci-3510.3.5-a-c92d6bc2c6\" (UID: \"9e4728c1d3c3744f633a965b67c79a05\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-c92d6bc2c6"
Jul 2 02:31:47.440889 kubelet[2167]: I0702 02:31:47.440879 2167 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/663c486df4f058777cbd27507371fb41-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.5-a-c92d6bc2c6\" (UID: \"663c486df4f058777cbd27507371fb41\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-c92d6bc2c6"
Jul 2 02:31:47.441004 kubelet[2167]: I0702 02:31:47.440995 2167 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/663c486df4f058777cbd27507371fb41-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.5-a-c92d6bc2c6\" (UID: \"663c486df4f058777cbd27507371fb41\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-c92d6bc2c6"
Jul 2 02:31:47.441117 kubelet[2167]: I0702 02:31:47.441108 2167 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e4ff5f0aded40d950ab46ad5c999f8be-kubeconfig\") pod \"kube-scheduler-ci-3510.3.5-a-c92d6bc2c6\" (UID: \"e4ff5f0aded40d950ab46ad5c999f8be\") " pod="kube-system/kube-scheduler-ci-3510.3.5-a-c92d6bc2c6"
Jul 2 02:31:47.442857 kubelet[2167]: E0702 02:31:47.442841 2167 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-a-c92d6bc2c6?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="400ms"
Jul 2 02:31:47.462257 systemd[1]: Created slice kubepods-burstable-pod663c486df4f058777cbd27507371fb41.slice.
Jul 2 02:31:47.465958 systemd[1]: Created slice kubepods-burstable-pode4ff5f0aded40d950ab46ad5c999f8be.slice.
Jul 2 02:31:47.540970 kubelet[2167]: I0702 02:31:47.540939 2167 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.5-a-c92d6bc2c6"
Jul 2 02:31:47.541412 kubelet[2167]: E0702 02:31:47.541386 2167 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-3510.3.5-a-c92d6bc2c6"
Jul 2 02:31:47.762403 env[1446]: time="2024-07-02T02:31:47.762286006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.5-a-c92d6bc2c6,Uid:9e4728c1d3c3744f633a965b67c79a05,Namespace:kube-system,Attempt:0,}"
Jul 2 02:31:47.766152 env[1446]: time="2024-07-02T02:31:47.766123189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.5-a-c92d6bc2c6,Uid:663c486df4f058777cbd27507371fb41,Namespace:kube-system,Attempt:0,}"
Jul 2 02:31:47.768941 env[1446]: time="2024-07-02T02:31:47.768820416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.5-a-c92d6bc2c6,Uid:e4ff5f0aded40d950ab46ad5c999f8be,Namespace:kube-system,Attempt:0,}"
Jul 2 02:31:47.843323 kubelet[2167]: E0702 02:31:47.843286 2167 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-a-c92d6bc2c6?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="800ms"
Jul 2 02:31:47.943262 kubelet[2167]: I0702 02:31:47.942961 2167 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.5-a-c92d6bc2c6"
Jul 2 02:31:47.943262 kubelet[2167]: E0702 02:31:47.943235 2167 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-3510.3.5-a-c92d6bc2c6"
Jul 2 02:31:48.163607 kubelet[2167]: W0702 02:31:48.163551 2167 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.5-a-c92d6bc2c6&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused
Jul 2 02:31:48.163607 kubelet[2167]: E0702 02:31:48.163612 2167 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.5-a-c92d6bc2c6&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused
Jul 2 02:31:48.263008 kubelet[2167]: W0702 02:31:48.262948 2167 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused
Jul 2 02:31:48.263008 kubelet[2167]: E0702 02:31:48.262985 2167 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused
Jul 2 02:31:48.350167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount817948149.mount: Deactivated successfully.
Jul 2 02:31:48.374123 env[1446]: time="2024-07-02T02:31:48.374079267Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 02:31:48.393063 env[1446]: time="2024-07-02T02:31:48.393016543Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 02:31:48.395806 env[1446]: time="2024-07-02T02:31:48.395780091Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 02:31:48.400538 env[1446]: time="2024-07-02T02:31:48.400505830Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 02:31:48.410259 env[1446]: time="2024-07-02T02:31:48.410226027Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 02:31:48.413011 env[1446]: time="2024-07-02T02:31:48.412979935Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 02:31:48.416804 env[1446]: time="2024-07-02T02:31:48.416720478Z" level=info msg="ImageCreate event
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 02:31:48.419824 env[1446]: time="2024-07-02T02:31:48.419783424Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 02:31:48.423396 env[1446]: time="2024-07-02T02:31:48.423356928Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 02:31:48.433514 env[1446]: time="2024-07-02T02:31:48.433481963Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 02:31:48.438373 env[1446]: time="2024-07-02T02:31:48.438349142Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 02:31:48.450276 env[1446]: time="2024-07-02T02:31:48.450232809Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 02:31:48.499223 kubelet[2167]: W0702 02:31:48.499169 2167 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Jul 2 02:31:48.499223 kubelet[2167]: E0702 02:31:48.499223 2167 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to 
watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Jul 2 02:31:48.499508 env[1446]: time="2024-07-02T02:31:48.494054934Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 02:31:48.499508 env[1446]: time="2024-07-02T02:31:48.494099174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 02:31:48.499508 env[1446]: time="2024-07-02T02:31:48.494109094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 02:31:48.499508 env[1446]: time="2024-07-02T02:31:48.494243414Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/64524d898729eb8f84c19c231ac0d4b498a368290eff903104da2af397e3abc8 pid=2205 runtime=io.containerd.runc.v2 Jul 2 02:31:48.510263 env[1446]: time="2024-07-02T02:31:48.510188183Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 02:31:48.510419 env[1446]: time="2024-07-02T02:31:48.510240343Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 02:31:48.510419 env[1446]: time="2024-07-02T02:31:48.510251383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 02:31:48.510607 env[1446]: time="2024-07-02T02:31:48.510571341Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/279c9e4e47292b2689598906a05d1b890b27c9b8fcddb92eba4d622fe142df1e pid=2229 runtime=io.containerd.runc.v2 Jul 2 02:31:48.518509 systemd[1]: Started cri-containerd-64524d898729eb8f84c19c231ac0d4b498a368290eff903104da2af397e3abc8.scope. Jul 2 02:31:48.537266 env[1446]: time="2024-07-02T02:31:48.537164983Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 02:31:48.537547 env[1446]: time="2024-07-02T02:31:48.537478862Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 02:31:48.537704 env[1446]: time="2024-07-02T02:31:48.537679021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 02:31:48.537957 env[1446]: time="2024-07-02T02:31:48.537926780Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3e32dc77847cca519fd7d10bc19d3ceeea09852ba2b44eed4777c45a943f0594 pid=2255 runtime=io.containerd.runc.v2 Jul 2 02:31:48.550366 systemd[1]: Started cri-containerd-279c9e4e47292b2689598906a05d1b890b27c9b8fcddb92eba4d622fe142df1e.scope. Jul 2 02:31:48.558688 systemd[1]: Started cri-containerd-3e32dc77847cca519fd7d10bc19d3ceeea09852ba2b44eed4777c45a943f0594.scope. 
Jul 2 02:31:48.582760 env[1446]: time="2024-07-02T02:31:48.582724141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.5-a-c92d6bc2c6,Uid:663c486df4f058777cbd27507371fb41,Namespace:kube-system,Attempt:0,} returns sandbox id \"64524d898729eb8f84c19c231ac0d4b498a368290eff903104da2af397e3abc8\"" Jul 2 02:31:48.589527 env[1446]: time="2024-07-02T02:31:48.589491831Z" level=info msg="CreateContainer within sandbox \"64524d898729eb8f84c19c231ac0d4b498a368290eff903104da2af397e3abc8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 02:31:48.598369 env[1446]: time="2024-07-02T02:31:48.598337511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.5-a-c92d6bc2c6,Uid:9e4728c1d3c3744f633a965b67c79a05,Namespace:kube-system,Attempt:0,} returns sandbox id \"279c9e4e47292b2689598906a05d1b890b27c9b8fcddb92eba4d622fe142df1e\"" Jul 2 02:31:48.601497 env[1446]: time="2024-07-02T02:31:48.601465177Z" level=info msg="CreateContainer within sandbox \"279c9e4e47292b2689598906a05d1b890b27c9b8fcddb92eba4d622fe142df1e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 02:31:48.603797 env[1446]: time="2024-07-02T02:31:48.603763647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.5-a-c92d6bc2c6,Uid:e4ff5f0aded40d950ab46ad5c999f8be,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e32dc77847cca519fd7d10bc19d3ceeea09852ba2b44eed4777c45a943f0594\"" Jul 2 02:31:48.608322 env[1446]: time="2024-07-02T02:31:48.608288867Z" level=info msg="CreateContainer within sandbox \"3e32dc77847cca519fd7d10bc19d3ceeea09852ba2b44eed4777c45a943f0594\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 02:31:48.644620 kubelet[2167]: E0702 02:31:48.644554 2167 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-a-c92d6bc2c6?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="1.6s" Jul 2 02:31:48.651951 env[1446]: time="2024-07-02T02:31:48.651912993Z" level=info msg="CreateContainer within sandbox \"279c9e4e47292b2689598906a05d1b890b27c9b8fcddb92eba4d622fe142df1e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ec35a2df589e47978f7753076a99f8c11fa13a472b913f6e6d645b663ee5c062\"" Jul 2 02:31:48.652632 env[1446]: time="2024-07-02T02:31:48.652610150Z" level=info msg="StartContainer for \"ec35a2df589e47978f7753076a99f8c11fa13a472b913f6e6d645b663ee5c062\"" Jul 2 02:31:48.666534 systemd[1]: Started cri-containerd-ec35a2df589e47978f7753076a99f8c11fa13a472b913f6e6d645b663ee5c062.scope. Jul 2 02:31:48.671604 env[1446]: time="2024-07-02T02:31:48.671565826Z" level=info msg="CreateContainer within sandbox \"64524d898729eb8f84c19c231ac0d4b498a368290eff903104da2af397e3abc8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cc0acad0fd7b4563c9ab8ff2f8624f30b1dca1f43d4b43dacfa146974acf558a\"" Jul 2 02:31:48.672118 env[1446]: time="2024-07-02T02:31:48.672082784Z" level=info msg="StartContainer for \"cc0acad0fd7b4563c9ab8ff2f8624f30b1dca1f43d4b43dacfa146974acf558a\"" Jul 2 02:31:48.677507 env[1446]: time="2024-07-02T02:31:48.677476440Z" level=info msg="CreateContainer within sandbox \"3e32dc77847cca519fd7d10bc19d3ceeea09852ba2b44eed4777c45a943f0594\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"379d783cdaa96ca12052ba7857c69d8969fbe4c907ffcabc75427229db6e3f6b\"" Jul 2 02:31:48.678502 env[1446]: time="2024-07-02T02:31:48.678475515Z" level=info msg="StartContainer for \"379d783cdaa96ca12052ba7857c69d8969fbe4c907ffcabc75427229db6e3f6b\"" Jul 2 02:31:48.701130 systemd[1]: Started cri-containerd-cc0acad0fd7b4563c9ab8ff2f8624f30b1dca1f43d4b43dacfa146974acf558a.scope. 
Jul 2 02:31:48.710416 kubelet[2167]: W0702 02:31:48.710363 2167 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.20.11:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Jul 2 02:31:48.710702 kubelet[2167]: E0702 02:31:48.710674 2167 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.11:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Jul 2 02:31:48.717532 systemd[1]: Started cri-containerd-379d783cdaa96ca12052ba7857c69d8969fbe4c907ffcabc75427229db6e3f6b.scope. Jul 2 02:31:48.728163 env[1446]: time="2024-07-02T02:31:48.728112095Z" level=info msg="StartContainer for \"ec35a2df589e47978f7753076a99f8c11fa13a472b913f6e6d645b663ee5c062\" returns successfully" Jul 2 02:31:48.749183 kubelet[2167]: I0702 02:31:48.749151 2167 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.5-a-c92d6bc2c6" Jul 2 02:31:48.749504 kubelet[2167]: E0702 02:31:48.749472 2167 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-3510.3.5-a-c92d6bc2c6" Jul 2 02:31:48.760072 env[1446]: time="2024-07-02T02:31:48.760027313Z" level=info msg="StartContainer for \"cc0acad0fd7b4563c9ab8ff2f8624f30b1dca1f43d4b43dacfa146974acf558a\" returns successfully" Jul 2 02:31:48.798801 env[1446]: time="2024-07-02T02:31:48.798755421Z" level=info msg="StartContainer for \"379d783cdaa96ca12052ba7857c69d8969fbe4c907ffcabc75427229db6e3f6b\" returns successfully" Jul 2 02:31:50.350888 kubelet[2167]: I0702 02:31:50.350857 2167 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.5-a-c92d6bc2c6" Jul 2 02:31:50.641817 kubelet[2167]: E0702 02:31:50.641770 2167 nodelease.go:49] 
"Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.5-a-c92d6bc2c6\" not found" node="ci-3510.3.5-a-c92d6bc2c6" Jul 2 02:31:50.710591 kubelet[2167]: I0702 02:31:50.710550 2167 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.5-a-c92d6bc2c6" Jul 2 02:31:51.215812 kubelet[2167]: I0702 02:31:51.215606 2167 apiserver.go:52] "Watching apiserver" Jul 2 02:31:51.239642 kubelet[2167]: I0702 02:31:51.239607 2167 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 02:31:53.247288 kubelet[2167]: W0702 02:31:53.247263 2167 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 02:31:53.360522 systemd[1]: Reloading. Jul 2 02:31:53.441295 /usr/lib/systemd/system-generators/torcx-generator[2453]: time="2024-07-02T02:31:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 02:31:53.441667 /usr/lib/systemd/system-generators/torcx-generator[2453]: time="2024-07-02T02:31:53Z" level=info msg="torcx already run" Jul 2 02:31:53.518438 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 02:31:53.518455 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 02:31:53.534916 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 2 02:31:53.651514 kubelet[2167]: I0702 02:31:53.651482 2167 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 02:31:53.653978 systemd[1]: Stopping kubelet.service... Jul 2 02:31:53.669657 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 02:31:53.669932 systemd[1]: Stopped kubelet.service. Jul 2 02:31:53.672168 systemd[1]: Starting kubelet.service... Jul 2 02:31:53.750054 systemd[1]: Started kubelet.service. Jul 2 02:31:54.104830 kubelet[2517]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 02:31:54.104830 kubelet[2517]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 02:31:54.104830 kubelet[2517]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 2 02:31:54.104830 kubelet[2517]: I0702 02:31:53.818464 2517 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 02:31:54.104830 kubelet[2517]: I0702 02:31:53.822801 2517 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jul 2 02:31:54.104830 kubelet[2517]: I0702 02:31:53.822822 2517 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 02:31:54.104830 kubelet[2517]: I0702 02:31:53.823008 2517 server.go:919] "Client rotation is on, will bootstrap in background" Jul 2 02:31:54.106847 kubelet[2517]: I0702 02:31:54.106817 2517 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 02:31:54.108965 kubelet[2517]: I0702 02:31:54.108944 2517 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 02:31:54.119109 kubelet[2517]: I0702 02:31:54.119082 2517 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 02:31:54.119332 kubelet[2517]: I0702 02:31:54.119304 2517 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 02:31:54.119502 kubelet[2517]: I0702 02:31:54.119483 2517 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 02:31:54.119588 kubelet[2517]: I0702 02:31:54.119507 2517 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 02:31:54.119588 kubelet[2517]: I0702 02:31:54.119516 2517 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 02:31:54.119588 kubelet[2517]: I0702 
02:31:54.119544 2517 state_mem.go:36] "Initialized new in-memory state store" Jul 2 02:31:54.119693 kubelet[2517]: I0702 02:31:54.119646 2517 kubelet.go:396] "Attempting to sync node with API server" Jul 2 02:31:54.119693 kubelet[2517]: I0702 02:31:54.119659 2517 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 02:31:54.119693 kubelet[2517]: I0702 02:31:54.119680 2517 kubelet.go:312] "Adding apiserver pod source" Jul 2 02:31:54.119693 kubelet[2517]: I0702 02:31:54.119693 2517 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 02:31:54.120563 kubelet[2517]: I0702 02:31:54.120533 2517 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 02:31:54.120736 kubelet[2517]: I0702 02:31:54.120715 2517 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 02:31:54.121076 kubelet[2517]: I0702 02:31:54.121053 2517 server.go:1256] "Started kubelet" Jul 2 02:31:54.124376 kubelet[2517]: I0702 02:31:54.123131 2517 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 02:31:54.135225 kubelet[2517]: I0702 02:31:54.135208 2517 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 02:31:54.137116 kubelet[2517]: I0702 02:31:54.137098 2517 server.go:461] "Adding debug handlers to kubelet server" Jul 2 02:31:54.138112 kubelet[2517]: I0702 02:31:54.138097 2517 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 02:31:54.138356 kubelet[2517]: I0702 02:31:54.138342 2517 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 02:31:54.139615 kubelet[2517]: I0702 02:31:54.139596 2517 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 02:31:54.144344 kubelet[2517]: I0702 02:31:54.144299 2517 desired_state_of_world_populator.go:151] "Desired state populator 
starts to run" Jul 2 02:31:54.144645 kubelet[2517]: I0702 02:31:54.144632 2517 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 02:31:54.147792 kubelet[2517]: I0702 02:31:54.147774 2517 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 02:31:54.148639 kubelet[2517]: I0702 02:31:54.148626 2517 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 02:31:54.148736 kubelet[2517]: I0702 02:31:54.148725 2517 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 02:31:54.148798 kubelet[2517]: I0702 02:31:54.148788 2517 kubelet.go:2329] "Starting kubelet main sync loop" Jul 2 02:31:54.148887 kubelet[2517]: E0702 02:31:54.148877 2517 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 02:31:54.160093 kubelet[2517]: I0702 02:31:54.160066 2517 factory.go:221] Registration of the systemd container factory successfully Jul 2 02:31:54.160161 kubelet[2517]: I0702 02:31:54.160140 2517 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 02:31:54.164229 kubelet[2517]: E0702 02:31:54.163886 2517 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 02:31:54.164229 kubelet[2517]: I0702 02:31:54.164063 2517 factory.go:221] Registration of the containerd container factory successfully Jul 2 02:31:54.202027 kubelet[2517]: I0702 02:31:54.201998 2517 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 02:31:54.202027 kubelet[2517]: I0702 02:31:54.202021 2517 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 02:31:54.202190 kubelet[2517]: I0702 02:31:54.202039 2517 state_mem.go:36] "Initialized new in-memory state store" Jul 2 02:31:54.202190 kubelet[2517]: I0702 02:31:54.202163 2517 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 02:31:54.202190 kubelet[2517]: I0702 02:31:54.202182 2517 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 02:31:54.202190 kubelet[2517]: I0702 02:31:54.202189 2517 policy_none.go:49] "None policy: Start" Jul 2 02:31:54.202796 kubelet[2517]: I0702 02:31:54.202774 2517 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 02:31:54.202796 kubelet[2517]: I0702 02:31:54.202799 2517 state_mem.go:35] "Initializing new in-memory state store" Jul 2 02:31:54.202994 kubelet[2517]: I0702 02:31:54.202978 2517 state_mem.go:75] "Updated machine memory state" Jul 2 02:31:54.206633 kubelet[2517]: I0702 02:31:54.206614 2517 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 02:31:54.206832 kubelet[2517]: I0702 02:31:54.206810 2517 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 02:31:54.245169 kubelet[2517]: I0702 02:31:54.245142 2517 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.5-a-c92d6bc2c6" Jul 2 02:31:54.249028 kubelet[2517]: I0702 02:31:54.249008 2517 topology_manager.go:215] "Topology Admit Handler" podUID="9e4728c1d3c3744f633a965b67c79a05" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.5-a-c92d6bc2c6" 
Jul 2 02:31:54.249251 kubelet[2517]: I0702 02:31:54.249237 2517 topology_manager.go:215] "Topology Admit Handler" podUID="663c486df4f058777cbd27507371fb41" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.5-a-c92d6bc2c6" Jul 2 02:31:54.249771 kubelet[2517]: I0702 02:31:54.249746 2517 topology_manager.go:215] "Topology Admit Handler" podUID="e4ff5f0aded40d950ab46ad5c999f8be" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.5-a-c92d6bc2c6" Jul 2 02:31:54.255498 kubelet[2517]: W0702 02:31:54.255480 2517 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 02:31:54.255732 kubelet[2517]: W0702 02:31:54.255717 2517 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 02:31:54.255864 kubelet[2517]: E0702 02:31:54.255850 2517 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.5-a-c92d6bc2c6\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.5-a-c92d6bc2c6" Jul 2 02:31:54.256068 kubelet[2517]: W0702 02:31:54.256054 2517 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 02:31:54.258793 kubelet[2517]: I0702 02:31:54.258771 2517 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510.3.5-a-c92d6bc2c6" Jul 2 02:31:54.258942 kubelet[2517]: I0702 02:31:54.258931 2517 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.5-a-c92d6bc2c6" Jul 2 02:31:54.367623 sudo[2547]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 2 02:31:54.369030 sudo[2547]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 2 02:31:54.445735 kubelet[2517]: I0702 02:31:54.445703 
2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9e4728c1d3c3744f633a965b67c79a05-k8s-certs\") pod \"kube-apiserver-ci-3510.3.5-a-c92d6bc2c6\" (UID: \"9e4728c1d3c3744f633a965b67c79a05\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-c92d6bc2c6" Jul 2 02:31:54.445985 kubelet[2517]: I0702 02:31:54.445972 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9e4728c1d3c3744f633a965b67c79a05-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.5-a-c92d6bc2c6\" (UID: \"9e4728c1d3c3744f633a965b67c79a05\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-c92d6bc2c6" Jul 2 02:31:54.446089 kubelet[2517]: I0702 02:31:54.446079 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/663c486df4f058777cbd27507371fb41-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.5-a-c92d6bc2c6\" (UID: \"663c486df4f058777cbd27507371fb41\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-c92d6bc2c6" Jul 2 02:31:54.446186 kubelet[2517]: I0702 02:31:54.446177 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/663c486df4f058777cbd27507371fb41-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.5-a-c92d6bc2c6\" (UID: \"663c486df4f058777cbd27507371fb41\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-c92d6bc2c6" Jul 2 02:31:54.446279 kubelet[2517]: I0702 02:31:54.446270 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/663c486df4f058777cbd27507371fb41-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.5-a-c92d6bc2c6\" 
(UID: \"663c486df4f058777cbd27507371fb41\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-c92d6bc2c6" Jul 2 02:31:54.446453 kubelet[2517]: I0702 02:31:54.446439 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e4ff5f0aded40d950ab46ad5c999f8be-kubeconfig\") pod \"kube-scheduler-ci-3510.3.5-a-c92d6bc2c6\" (UID: \"e4ff5f0aded40d950ab46ad5c999f8be\") " pod="kube-system/kube-scheduler-ci-3510.3.5-a-c92d6bc2c6" Jul 2 02:31:54.446555 kubelet[2517]: I0702 02:31:54.446545 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9e4728c1d3c3744f633a965b67c79a05-ca-certs\") pod \"kube-apiserver-ci-3510.3.5-a-c92d6bc2c6\" (UID: \"9e4728c1d3c3744f633a965b67c79a05\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-c92d6bc2c6" Jul 2 02:31:54.446639 kubelet[2517]: I0702 02:31:54.446630 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/663c486df4f058777cbd27507371fb41-ca-certs\") pod \"kube-controller-manager-ci-3510.3.5-a-c92d6bc2c6\" (UID: \"663c486df4f058777cbd27507371fb41\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-c92d6bc2c6" Jul 2 02:31:54.446729 kubelet[2517]: I0702 02:31:54.446720 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/663c486df4f058777cbd27507371fb41-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.5-a-c92d6bc2c6\" (UID: \"663c486df4f058777cbd27507371fb41\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-c92d6bc2c6" Jul 2 02:31:54.893026 sudo[2547]: pam_unix(sudo:session): session closed for user root Jul 2 02:31:55.120717 kubelet[2517]: I0702 02:31:55.120680 2517 apiserver.go:52] "Watching apiserver" Jul 2 
02:31:55.145385 kubelet[2517]: I0702 02:31:55.145282 2517 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 02:31:55.158904 kubelet[2517]: I0702 02:31:55.158739 2517 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.5-a-c92d6bc2c6" podStartSLOduration=2.158673439 podStartE2EDuration="2.158673439s" podCreationTimestamp="2024-07-02 02:31:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 02:31:55.157759762 +0000 UTC m=+1.400289932" watchObservedRunningTime="2024-07-02 02:31:55.158673439 +0000 UTC m=+1.401203609" Jul 2 02:31:55.172509 kubelet[2517]: I0702 02:31:55.172464 2517 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.5-a-c92d6bc2c6" podStartSLOduration=1.172430065 podStartE2EDuration="1.172430065s" podCreationTimestamp="2024-07-02 02:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 02:31:55.164766055 +0000 UTC m=+1.407296225" watchObservedRunningTime="2024-07-02 02:31:55.172430065 +0000 UTC m=+1.414960235" Jul 2 02:31:55.181745 kubelet[2517]: I0702 02:31:55.181716 2517 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.5-a-c92d6bc2c6" podStartSLOduration=1.181681789 podStartE2EDuration="1.181681789s" podCreationTimestamp="2024-07-02 02:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 02:31:55.173216262 +0000 UTC m=+1.415746432" watchObservedRunningTime="2024-07-02 02:31:55.181681789 +0000 UTC m=+1.424211999" Jul 2 02:31:55.197199 kubelet[2517]: W0702 02:31:55.197172 2517 warnings.go:70] metadata.name: this is used in the 
Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 02:31:55.197300 kubelet[2517]: E0702 02:31:55.197260 2517 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.5-a-c92d6bc2c6\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.5-a-c92d6bc2c6" Jul 2 02:31:57.728296 sudo[1821]: pam_unix(sudo:session): session closed for user root Jul 2 02:31:57.810517 sshd[1818]: pam_unix(sshd:session): session closed for user core Jul 2 02:31:57.813237 systemd[1]: sshd@4-10.200.20.11:22-10.200.16.10:59924.service: Deactivated successfully. Jul 2 02:31:57.813966 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 02:31:57.814109 systemd[1]: session-7.scope: Consumed 9.390s CPU time. Jul 2 02:31:57.814776 systemd-logind[1435]: Session 7 logged out. Waiting for processes to exit. Jul 2 02:31:57.815582 systemd-logind[1435]: Removed session 7. Jul 2 02:32:08.898028 kubelet[2517]: I0702 02:32:08.898004 2517 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 02:32:08.898989 env[1446]: time="2024-07-02T02:32:08.898930673Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 2 02:32:08.899498 kubelet[2517]: I0702 02:32:08.899472 2517 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 02:32:08.927602 kubelet[2517]: I0702 02:32:08.927567 2517 topology_manager.go:215] "Topology Admit Handler" podUID="709d54a0-363a-4a60-89e7-314339f722a0" podNamespace="kube-system" podName="cilium-operator-5cc964979-k2vxz" Jul 2 02:32:08.932297 systemd[1]: Created slice kubepods-besteffort-pod709d54a0_363a_4a60_89e7_314339f722a0.slice. 
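The `Created slice kubepods-besteffort-pod709d54a0_363a_4a60_89e7_314339f722a0.slice` entry above shows the kubelet's systemd cgroup driver naming scheme: each pod gets a leaf slice under a per-QoS parent, with the dashes of the pod UID rewritten to underscores. A minimal sketch that just reproduces the pattern visible in this log (`pod_slice_name` is a hypothetical helper of my own; the real kubelet builds these names via its cgroup manager):

```python
def pod_slice_name(pod_uid: str, qos_class: str) -> str:
    """Leaf slice name for a pod, as seen in the log above.

    qos_class is "guaranteed", "burstable", or "besteffort"; in the usual
    layout, guaranteed pods sit directly under kubepods.slice, which is why
    only besteffort/burstable appear in the parent part of the name.
    """
    parent = "kubepods" if qos_class == "guaranteed" else f"kubepods-{qos_class}"
    return f"{parent}-pod{pod_uid.replace('-', '_')}.slice"

print(pod_slice_name("709d54a0-363a-4a60-89e7-314339f722a0", "besteffort"))
# kubepods-besteffort-pod709d54a0_363a_4a60_89e7_314339f722a0.slice
```

The same rule reproduces the burstable slice created for the cilium pod (`kubepods-burstable-pod91ab30d0_…`) further down in the log.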
Jul 2 02:32:08.941378 kubelet[2517]: W0702 02:32:08.941347 2517 reflector.go:539] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.5-a-c92d6bc2c6" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.5-a-c92d6bc2c6' and this object Jul 2 02:32:08.941378 kubelet[2517]: E0702 02:32:08.941382 2517 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.5-a-c92d6bc2c6" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.5-a-c92d6bc2c6' and this object Jul 2 02:32:08.941550 kubelet[2517]: W0702 02:32:08.941527 2517 reflector.go:539] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510.3.5-a-c92d6bc2c6" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.5-a-c92d6bc2c6' and this object Jul 2 02:32:08.941589 kubelet[2517]: E0702 02:32:08.941550 2517 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510.3.5-a-c92d6bc2c6" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.5-a-c92d6bc2c6' and this object Jul 2 02:32:09.015705 kubelet[2517]: I0702 02:32:09.015659 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/709d54a0-363a-4a60-89e7-314339f722a0-cilium-config-path\") pod \"cilium-operator-5cc964979-k2vxz\" (UID: 
\"709d54a0-363a-4a60-89e7-314339f722a0\") " pod="kube-system/cilium-operator-5cc964979-k2vxz" Jul 2 02:32:09.015705 kubelet[2517]: I0702 02:32:09.015709 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l99sg\" (UniqueName: \"kubernetes.io/projected/709d54a0-363a-4a60-89e7-314339f722a0-kube-api-access-l99sg\") pod \"cilium-operator-5cc964979-k2vxz\" (UID: \"709d54a0-363a-4a60-89e7-314339f722a0\") " pod="kube-system/cilium-operator-5cc964979-k2vxz" Jul 2 02:32:09.058873 kubelet[2517]: I0702 02:32:09.058831 2517 topology_manager.go:215] "Topology Admit Handler" podUID="91ab30d0-fead-4cdd-9688-ace1474008e2" podNamespace="kube-system" podName="cilium-rkk6m" Jul 2 02:32:09.063756 systemd[1]: Created slice kubepods-burstable-pod91ab30d0_fead_4cdd_9688_ace1474008e2.slice. Jul 2 02:32:09.067932 kubelet[2517]: I0702 02:32:09.067897 2517 topology_manager.go:215] "Topology Admit Handler" podUID="7ef3038f-0770-422f-af34-b37b576949ba" podNamespace="kube-system" podName="kube-proxy-czv9d" Jul 2 02:32:09.071851 systemd[1]: Created slice kubepods-besteffort-pod7ef3038f_0770_422f_af34_b37b576949ba.slice. 
Jul 2 02:32:09.115985 kubelet[2517]: I0702 02:32:09.115950 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/91ab30d0-fead-4cdd-9688-ace1474008e2-cilium-run\") pod \"cilium-rkk6m\" (UID: \"91ab30d0-fead-4cdd-9688-ace1474008e2\") " pod="kube-system/cilium-rkk6m" Jul 2 02:32:09.116147 kubelet[2517]: I0702 02:32:09.115997 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/91ab30d0-fead-4cdd-9688-ace1474008e2-cni-path\") pod \"cilium-rkk6m\" (UID: \"91ab30d0-fead-4cdd-9688-ace1474008e2\") " pod="kube-system/cilium-rkk6m" Jul 2 02:32:09.116147 kubelet[2517]: I0702 02:32:09.116034 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91ab30d0-fead-4cdd-9688-ace1474008e2-lib-modules\") pod \"cilium-rkk6m\" (UID: \"91ab30d0-fead-4cdd-9688-ace1474008e2\") " pod="kube-system/cilium-rkk6m" Jul 2 02:32:09.116147 kubelet[2517]: I0702 02:32:09.116055 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-657ws\" (UniqueName: \"kubernetes.io/projected/91ab30d0-fead-4cdd-9688-ace1474008e2-kube-api-access-657ws\") pod \"cilium-rkk6m\" (UID: \"91ab30d0-fead-4cdd-9688-ace1474008e2\") " pod="kube-system/cilium-rkk6m" Jul 2 02:32:09.116147 kubelet[2517]: I0702 02:32:09.116077 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ef3038f-0770-422f-af34-b37b576949ba-xtables-lock\") pod \"kube-proxy-czv9d\" (UID: \"7ef3038f-0770-422f-af34-b37b576949ba\") " pod="kube-system/kube-proxy-czv9d" Jul 2 02:32:09.116147 kubelet[2517]: I0702 02:32:09.116095 2517 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91ab30d0-fead-4cdd-9688-ace1474008e2-xtables-lock\") pod \"cilium-rkk6m\" (UID: \"91ab30d0-fead-4cdd-9688-ace1474008e2\") " pod="kube-system/cilium-rkk6m" Jul 2 02:32:09.116147 kubelet[2517]: I0702 02:32:09.116116 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/91ab30d0-fead-4cdd-9688-ace1474008e2-host-proc-sys-net\") pod \"cilium-rkk6m\" (UID: \"91ab30d0-fead-4cdd-9688-ace1474008e2\") " pod="kube-system/cilium-rkk6m" Jul 2 02:32:09.116299 kubelet[2517]: I0702 02:32:09.116134 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/91ab30d0-fead-4cdd-9688-ace1474008e2-hostproc\") pod \"cilium-rkk6m\" (UID: \"91ab30d0-fead-4cdd-9688-ace1474008e2\") " pod="kube-system/cilium-rkk6m" Jul 2 02:32:09.116299 kubelet[2517]: I0702 02:32:09.116154 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7ef3038f-0770-422f-af34-b37b576949ba-kube-proxy\") pod \"kube-proxy-czv9d\" (UID: \"7ef3038f-0770-422f-af34-b37b576949ba\") " pod="kube-system/kube-proxy-czv9d" Jul 2 02:32:09.116299 kubelet[2517]: I0702 02:32:09.116177 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/91ab30d0-fead-4cdd-9688-ace1474008e2-etc-cni-netd\") pod \"cilium-rkk6m\" (UID: \"91ab30d0-fead-4cdd-9688-ace1474008e2\") " pod="kube-system/cilium-rkk6m" Jul 2 02:32:09.116299 kubelet[2517]: I0702 02:32:09.116195 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/91ab30d0-fead-4cdd-9688-ace1474008e2-clustermesh-secrets\") pod \"cilium-rkk6m\" (UID: \"91ab30d0-fead-4cdd-9688-ace1474008e2\") " pod="kube-system/cilium-rkk6m" Jul 2 02:32:09.116299 kubelet[2517]: I0702 02:32:09.116218 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/91ab30d0-fead-4cdd-9688-ace1474008e2-host-proc-sys-kernel\") pod \"cilium-rkk6m\" (UID: \"91ab30d0-fead-4cdd-9688-ace1474008e2\") " pod="kube-system/cilium-rkk6m" Jul 2 02:32:09.116299 kubelet[2517]: I0702 02:32:09.116247 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/91ab30d0-fead-4cdd-9688-ace1474008e2-bpf-maps\") pod \"cilium-rkk6m\" (UID: \"91ab30d0-fead-4cdd-9688-ace1474008e2\") " pod="kube-system/cilium-rkk6m" Jul 2 02:32:09.116458 kubelet[2517]: I0702 02:32:09.116264 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ef3038f-0770-422f-af34-b37b576949ba-lib-modules\") pod \"kube-proxy-czv9d\" (UID: \"7ef3038f-0770-422f-af34-b37b576949ba\") " pod="kube-system/kube-proxy-czv9d" Jul 2 02:32:09.116458 kubelet[2517]: I0702 02:32:09.116282 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/91ab30d0-fead-4cdd-9688-ace1474008e2-hubble-tls\") pod \"cilium-rkk6m\" (UID: \"91ab30d0-fead-4cdd-9688-ace1474008e2\") " pod="kube-system/cilium-rkk6m" Jul 2 02:32:09.116458 kubelet[2517]: I0702 02:32:09.116303 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/91ab30d0-fead-4cdd-9688-ace1474008e2-cilium-cgroup\") pod \"cilium-rkk6m\" (UID: 
\"91ab30d0-fead-4cdd-9688-ace1474008e2\") " pod="kube-system/cilium-rkk6m" Jul 2 02:32:09.116458 kubelet[2517]: I0702 02:32:09.116346 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/91ab30d0-fead-4cdd-9688-ace1474008e2-cilium-config-path\") pod \"cilium-rkk6m\" (UID: \"91ab30d0-fead-4cdd-9688-ace1474008e2\") " pod="kube-system/cilium-rkk6m" Jul 2 02:32:09.116458 kubelet[2517]: I0702 02:32:09.116368 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75hg5\" (UniqueName: \"kubernetes.io/projected/7ef3038f-0770-422f-af34-b37b576949ba-kube-api-access-75hg5\") pod \"kube-proxy-czv9d\" (UID: \"7ef3038f-0770-422f-af34-b37b576949ba\") " pod="kube-system/kube-proxy-czv9d" Jul 2 02:32:10.125638 kubelet[2517]: E0702 02:32:10.125588 2517 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 2 02:32:10.125638 kubelet[2517]: E0702 02:32:10.125638 2517 projected.go:200] Error preparing data for projected volume kube-api-access-l99sg for pod kube-system/cilium-operator-5cc964979-k2vxz: failed to sync configmap cache: timed out waiting for the condition Jul 2 02:32:10.127566 kubelet[2517]: E0702 02:32:10.125742 2517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/709d54a0-363a-4a60-89e7-314339f722a0-kube-api-access-l99sg podName:709d54a0-363a-4a60-89e7-314339f722a0 nodeName:}" failed. No retries permitted until 2024-07-02 02:32:10.62571337 +0000 UTC m=+16.868243580 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l99sg" (UniqueName: "kubernetes.io/projected/709d54a0-363a-4a60-89e7-314339f722a0-kube-api-access-l99sg") pod "cilium-operator-5cc964979-k2vxz" (UID: "709d54a0-363a-4a60-89e7-314339f722a0") : failed to sync configmap cache: timed out waiting for the condition Jul 2 02:32:10.267003 env[1446]: time="2024-07-02T02:32:10.266958661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rkk6m,Uid:91ab30d0-fead-4cdd-9688-ace1474008e2,Namespace:kube-system,Attempt:0,}" Jul 2 02:32:10.274786 env[1446]: time="2024-07-02T02:32:10.274735077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-czv9d,Uid:7ef3038f-0770-422f-af34-b37b576949ba,Namespace:kube-system,Attempt:0,}" Jul 2 02:32:10.324980 env[1446]: time="2024-07-02T02:32:10.322477132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 02:32:10.324980 env[1446]: time="2024-07-02T02:32:10.322510292Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 02:32:10.324980 env[1446]: time="2024-07-02T02:32:10.322519972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 02:32:10.324980 env[1446]: time="2024-07-02T02:32:10.322611211Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/54f9184d2cae0a10ba6dd5977e15d2bd69ea9ddf50993f59570e1b0d1619a1c5 pid=2597 runtime=io.containerd.runc.v2 Jul 2 02:32:10.325818 env[1446]: time="2024-07-02T02:32:10.325760402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 02:32:10.325889 env[1446]: time="2024-07-02T02:32:10.325826842Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 02:32:10.325889 env[1446]: time="2024-07-02T02:32:10.325852802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 02:32:10.326046 env[1446]: time="2024-07-02T02:32:10.325983241Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/06aa693a7f7751a6c73258b71dacf33532d46e4235e5aea527fad360ecb921c0 pid=2611 runtime=io.containerd.runc.v2 Jul 2 02:32:10.343886 systemd[1]: Started cri-containerd-54f9184d2cae0a10ba6dd5977e15d2bd69ea9ddf50993f59570e1b0d1619a1c5.scope. Jul 2 02:32:10.350631 systemd[1]: Started cri-containerd-06aa693a7f7751a6c73258b71dacf33532d46e4235e5aea527fad360ecb921c0.scope. Jul 2 02:32:10.372589 env[1446]: time="2024-07-02T02:32:10.372534620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rkk6m,Uid:91ab30d0-fead-4cdd-9688-ace1474008e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"54f9184d2cae0a10ba6dd5977e15d2bd69ea9ddf50993f59570e1b0d1619a1c5\"" Jul 2 02:32:10.374641 env[1446]: time="2024-07-02T02:32:10.374617133Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 2 02:32:10.392839 env[1446]: time="2024-07-02T02:32:10.392796558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-czv9d,Uid:7ef3038f-0770-422f-af34-b37b576949ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"06aa693a7f7751a6c73258b71dacf33532d46e4235e5aea527fad360ecb921c0\"" Jul 2 02:32:10.395226 env[1446]: time="2024-07-02T02:32:10.395195271Z" level=info msg="CreateContainer within sandbox \"06aa693a7f7751a6c73258b71dacf33532d46e4235e5aea527fad360ecb921c0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 02:32:10.431596 env[1446]: time="2024-07-02T02:32:10.431525040Z" level=info msg="CreateContainer within sandbox 
\"06aa693a7f7751a6c73258b71dacf33532d46e4235e5aea527fad360ecb921c0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"142b9123fd0eb220c5923697d5229336d7193569ae8d6c106390aa4d98d8827e\"" Jul 2 02:32:10.433794 env[1446]: time="2024-07-02T02:32:10.432510917Z" level=info msg="StartContainer for \"142b9123fd0eb220c5923697d5229336d7193569ae8d6c106390aa4d98d8827e\"" Jul 2 02:32:10.448942 systemd[1]: Started cri-containerd-142b9123fd0eb220c5923697d5229336d7193569ae8d6c106390aa4d98d8827e.scope. Jul 2 02:32:10.477794 env[1446]: time="2024-07-02T02:32:10.477729740Z" level=info msg="StartContainer for \"142b9123fd0eb220c5923697d5229336d7193569ae8d6c106390aa4d98d8827e\" returns successfully" Jul 2 02:32:10.742004 env[1446]: time="2024-07-02T02:32:10.741896337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-k2vxz,Uid:709d54a0-363a-4a60-89e7-314339f722a0,Namespace:kube-system,Attempt:0,}" Jul 2 02:32:10.775875 env[1446]: time="2024-07-02T02:32:10.775696754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 02:32:10.775875 env[1446]: time="2024-07-02T02:32:10.775731914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 02:32:10.775875 env[1446]: time="2024-07-02T02:32:10.775741594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 02:32:10.776153 env[1446]: time="2024-07-02T02:32:10.776101433Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d9c721c062bb7cd81ef1e44c51615958bd05b327311b90454225c4c40ee31c27 pid=2791 runtime=io.containerd.runc.v2 Jul 2 02:32:10.786712 systemd[1]: Started cri-containerd-d9c721c062bb7cd81ef1e44c51615958bd05b327311b90454225c4c40ee31c27.scope. 
Jul 2 02:32:10.815140 env[1446]: time="2024-07-02T02:32:10.815104955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-k2vxz,Uid:709d54a0-363a-4a60-89e7-314339f722a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9c721c062bb7cd81ef1e44c51615958bd05b327311b90454225c4c40ee31c27\"" Jul 2 02:32:11.222356 kubelet[2517]: I0702 02:32:11.222228 2517 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-czv9d" podStartSLOduration=2.222182767 podStartE2EDuration="2.222182767s" podCreationTimestamp="2024-07-02 02:32:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 02:32:11.221758128 +0000 UTC m=+17.464288338" watchObservedRunningTime="2024-07-02 02:32:11.222182767 +0000 UTC m=+17.464712937" Jul 2 02:32:15.283931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1361065428.mount: Deactivated successfully. 
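Each kubelet timestamp above carries a monotonic offset alongside the wall-clock time (`m=+17.464288338`, seconds since the kubelet process started). The redundancy allows a consistency check: wall time minus offset should yield nearly the same process start time for every entry. A sketch, assuming the stamps are UTC as printed (`kubelet_start` is my own helper, not a kubelet API):

```python
from datetime import datetime, timezone

def kubelet_start(stamp: str, m_offset: float) -> float:
    """Epoch seconds at which the kubelet started, recovered from one
    'YYYY-mm-dd HH:MM:SS.nnnnnnnnn +0000 UTC' stamp and its m=+ offset."""
    # Split off the zone suffix, then the nanosecond fraction (strptime's
    # %f only accepts up to 6 digits, so handle the fraction manually).
    date_time, frac = stamp.split(" +0000 UTC")[0].split(".")
    base = datetime.strptime(date_time, "%Y-%m-%d %H:%M:%S")
    epoch = base.replace(tzinfo=timezone.utc).timestamp() + float("0." + frac)
    return epoch - m_offset

# Two independent entries from this log should agree to well under a millisecond:
a = kubelet_start("2024-07-02 02:32:11.221758128 +0000 UTC", 17.464288338)
b = kubelet_start("2024-07-02 02:32:21.258250652 +0000 UTC", 27.500780822)
```

Both stamps/offsets are taken verbatim from the `pod_startup_latency_tracker` entries nearby.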
Jul 2 02:32:17.413299 env[1446]: time="2024-07-02T02:32:17.413248318Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 02:32:17.420615 env[1446]: time="2024-07-02T02:32:17.420578058Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 02:32:17.425472 env[1446]: time="2024-07-02T02:32:17.425432964Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 02:32:17.426126 env[1446]: time="2024-07-02T02:32:17.426090123Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 2 02:32:17.429504 env[1446]: time="2024-07-02T02:32:17.429467913Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 02:32:17.430914 env[1446]: time="2024-07-02T02:32:17.430887789Z" level=info msg="CreateContainer within sandbox \"54f9184d2cae0a10ba6dd5977e15d2bd69ea9ddf50993f59570e1b0d1619a1c5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 02:32:17.456182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1240665209.mount: Deactivated successfully. 
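The pull above addresses the image as `quay.io/cilium/cilium:v1.12.5@sha256:06ce…`, i.e. repository, tag, and content digest in a single reference; the runtime resolves by digest and reports back the local `sha256:b69c…` image ID. A naive splitter for that reference shape (a sketch only, not a full OCI reference parser, which would also handle registry ports and default-registry rules):

```python
def split_image_ref(ref: str):
    """Split 'repo:tag@sha256:digest' into (repo, tag, digest).

    Tag or digest may be absent; missing parts come back as None.
    """
    digest = None
    if "@" in ref:
        ref, digest = ref.split("@", 1)
    repo, tag = ref, None
    # Treat the last ':' as a tag separator only if it appears after the
    # last '/', so a registry port like host:5000/img is not mistaken for a tag.
    if ":" in ref.rsplit("/", 1)[-1]:
        repo, tag = ref.rsplit(":", 1)
    return repo, tag, digest

parts = split_image_ref(
    "quay.io/cilium/cilium:v1.12.5@sha256:"
    "06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"
)
```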
Jul 2 02:32:17.471805 env[1446]: time="2024-07-02T02:32:17.471762356Z" level=info msg="CreateContainer within sandbox \"54f9184d2cae0a10ba6dd5977e15d2bd69ea9ddf50993f59570e1b0d1619a1c5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0d64a3a99146f0aeee9aba20fece7544e6b14c8b484c57f8e0ef641484361405\"" Jul 2 02:32:17.474182 env[1446]: time="2024-07-02T02:32:17.474052350Z" level=info msg="StartContainer for \"0d64a3a99146f0aeee9aba20fece7544e6b14c8b484c57f8e0ef641484361405\"" Jul 2 02:32:17.492366 systemd[1]: Started cri-containerd-0d64a3a99146f0aeee9aba20fece7544e6b14c8b484c57f8e0ef641484361405.scope. Jul 2 02:32:17.524479 env[1446]: time="2024-07-02T02:32:17.524430131Z" level=info msg="StartContainer for \"0d64a3a99146f0aeee9aba20fece7544e6b14c8b484c57f8e0ef641484361405\" returns successfully" Jul 2 02:32:17.531374 systemd[1]: cri-containerd-0d64a3a99146f0aeee9aba20fece7544e6b14c8b484c57f8e0ef641484361405.scope: Deactivated successfully. Jul 2 02:32:18.454272 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d64a3a99146f0aeee9aba20fece7544e6b14c8b484c57f8e0ef641484361405-rootfs.mount: Deactivated successfully. 
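Note how each container leaves a family of systemd units keyed by the same 64-hex-digit ID: `cri-containerd-<id>.scope` for the task and `run-containerd-…-<id>-rootfs.mount` for its root filesystem. A small helper for correlating such units (a convenience function of my own, not part of containerd):

```python
import re

HEX64 = re.compile(r"[0-9a-f]{64}")

def container_id(unit: str):
    """Return the 64-hex container ID embedded in a systemd unit name, or None."""
    m = HEX64.search(unit)
    return m.group(0) if m else None

cid = container_id(
    "run-containerd-io.containerd.runtime.v2.task-k8s.io-"
    "0d64a3a99146f0aeee9aba20fece7544e6b14c8b484c57f8e0ef641484361405"
    "-rootfs.mount"
)
# cid == "0d64a3a99146f0aeee9aba20fece7544e6b14c8b484c57f8e0ef641484361405"
```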
Jul 2 02:32:19.277918 env[1446]: time="2024-07-02T02:32:19.277865300Z" level=info msg="shim disconnected" id=0d64a3a99146f0aeee9aba20fece7544e6b14c8b484c57f8e0ef641484361405 Jul 2 02:32:19.278355 env[1446]: time="2024-07-02T02:32:19.278302659Z" level=warning msg="cleaning up after shim disconnected" id=0d64a3a99146f0aeee9aba20fece7544e6b14c8b484c57f8e0ef641484361405 namespace=k8s.io Jul 2 02:32:19.278438 env[1446]: time="2024-07-02T02:32:19.278424698Z" level=info msg="cleaning up dead shim" Jul 2 02:32:19.285158 env[1446]: time="2024-07-02T02:32:19.285117120Z" level=warning msg="cleanup warnings time=\"2024-07-02T02:32:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2919 runtime=io.containerd.runc.v2\n" Jul 2 02:32:20.233129 env[1446]: time="2024-07-02T02:32:20.233087453Z" level=info msg="CreateContainer within sandbox \"54f9184d2cae0a10ba6dd5977e15d2bd69ea9ddf50993f59570e1b0d1619a1c5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 02:32:20.267473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2522410093.mount: Deactivated successfully. Jul 2 02:32:20.277229 env[1446]: time="2024-07-02T02:32:20.277182136Z" level=info msg="CreateContainer within sandbox \"54f9184d2cae0a10ba6dd5977e15d2bd69ea9ddf50993f59570e1b0d1619a1c5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"613a66718c8d00ea214a9eddb209d745b0cf701de6dc04a3c3b771b34ed6d3fa\"" Jul 2 02:32:20.279143 env[1446]: time="2024-07-02T02:32:20.277931934Z" level=info msg="StartContainer for \"613a66718c8d00ea214a9eddb209d745b0cf701de6dc04a3c3b771b34ed6d3fa\"" Jul 2 02:32:20.306677 systemd[1]: Started cri-containerd-613a66718c8d00ea214a9eddb209d745b0cf701de6dc04a3c3b771b34ed6d3fa.scope. 
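The containerd `env[1446]` lines above are logfmt-style `key="value"` records (`time`, `level`, `msg`, plus extras such as `id`, `namespace`, `pid`, `runtime`). Because the values are double-quoted, `shlex` can do the tokenizing; a simplified sketch (it ignores bare tokens without `=`, and the field handling is my own, not a containerd API):

```python
import shlex

def parse_logfmt(line: str) -> dict:
    """Split a containerd-style key="value" log line into a dict."""
    fields = {}
    for token in shlex.split(line):
        if "=" in token:
            key, _, value = token.partition("=")
            fields[key] = value
    return fields

# An entry taken from the shim-cleanup lines above:
rec = parse_logfmt(
    'time="2024-07-02T02:32:19.278302659Z" level=warning '
    'msg="cleaning up after shim disconnected" '
    'id=0d64a3a99146f0aeee9aba20fece7544e6b14c8b484c57f8e0ef641484361405 '
    'namespace=k8s.io'
)
# rec["level"] == "warning"
```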
Jul 2 02:32:20.344097 env[1446]: time="2024-07-02T02:32:20.344059678Z" level=info msg="StartContainer for \"613a66718c8d00ea214a9eddb209d745b0cf701de6dc04a3c3b771b34ed6d3fa\" returns successfully" Jul 2 02:32:20.348512 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 02:32:20.348798 systemd[1]: Stopped systemd-sysctl.service. Jul 2 02:32:20.349006 systemd[1]: Stopping systemd-sysctl.service... Jul 2 02:32:20.350580 systemd[1]: Starting systemd-sysctl.service... Jul 2 02:32:20.357812 systemd[1]: Finished systemd-sysctl.service. Jul 2 02:32:20.358712 systemd[1]: cri-containerd-613a66718c8d00ea214a9eddb209d745b0cf701de6dc04a3c3b771b34ed6d3fa.scope: Deactivated successfully. Jul 2 02:32:20.396829 env[1446]: time="2024-07-02T02:32:20.396779298Z" level=info msg="shim disconnected" id=613a66718c8d00ea214a9eddb209d745b0cf701de6dc04a3c3b771b34ed6d3fa Jul 2 02:32:20.396829 env[1446]: time="2024-07-02T02:32:20.396822658Z" level=warning msg="cleaning up after shim disconnected" id=613a66718c8d00ea214a9eddb209d745b0cf701de6dc04a3c3b771b34ed6d3fa namespace=k8s.io Jul 2 02:32:20.396829 env[1446]: time="2024-07-02T02:32:20.396833497Z" level=info msg="cleaning up dead shim" Jul 2 02:32:20.404518 env[1446]: time="2024-07-02T02:32:20.404469637Z" level=warning msg="cleanup warnings time=\"2024-07-02T02:32:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2985 runtime=io.containerd.runc.v2\n" Jul 2 02:32:20.951502 env[1446]: time="2024-07-02T02:32:20.951438781Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 02:32:20.956796 env[1446]: time="2024-07-02T02:32:20.956756087Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Jul 2 02:32:20.960368 env[1446]: time="2024-07-02T02:32:20.960331477Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 02:32:20.960979 env[1446]: time="2024-07-02T02:32:20.960944156Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 2 02:32:20.964884 env[1446]: time="2024-07-02T02:32:20.964259627Z" level=info msg="CreateContainer within sandbox \"d9c721c062bb7cd81ef1e44c51615958bd05b327311b90454225c4c40ee31c27\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 2 02:32:20.991722 env[1446]: time="2024-07-02T02:32:20.991680514Z" level=info msg="CreateContainer within sandbox \"d9c721c062bb7cd81ef1e44c51615958bd05b327311b90454225c4c40ee31c27\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"66c61961a61957de853ba5955425136736a2e0e66d30d860526f32028cf37387\"" Jul 2 02:32:20.994038 env[1446]: time="2024-07-02T02:32:20.993500349Z" level=info msg="StartContainer for \"66c61961a61957de853ba5955425136736a2e0e66d30d860526f32028cf37387\"" Jul 2 02:32:21.007766 systemd[1]: Started cri-containerd-66c61961a61957de853ba5955425136736a2e0e66d30d860526f32028cf37387.scope. 
Jul 2 02:32:21.038182 env[1446]: time="2024-07-02T02:32:21.038137071Z" level=info msg="StartContainer for \"66c61961a61957de853ba5955425136736a2e0e66d30d860526f32028cf37387\" returns successfully" Jul 2 02:32:21.236418 env[1446]: time="2024-07-02T02:32:21.236262030Z" level=info msg="CreateContainer within sandbox \"54f9184d2cae0a10ba6dd5977e15d2bd69ea9ddf50993f59570e1b0d1619a1c5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 02:32:21.264566 systemd[1]: run-containerd-runc-k8s.io-613a66718c8d00ea214a9eddb209d745b0cf701de6dc04a3c3b771b34ed6d3fa-runc.IHsKgo.mount: Deactivated successfully. Jul 2 02:32:21.264826 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-613a66718c8d00ea214a9eddb209d745b0cf701de6dc04a3c3b771b34ed6d3fa-rootfs.mount: Deactivated successfully. Jul 2 02:32:21.278870 env[1446]: time="2024-07-02T02:32:21.278820598Z" level=info msg="CreateContainer within sandbox \"54f9184d2cae0a10ba6dd5977e15d2bd69ea9ddf50993f59570e1b0d1619a1c5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"51c04da40129959fd7df64c2b26c8d7b486aef32a8a75fe26297cef6ad5baaa1\"" Jul 2 02:32:21.279545 env[1446]: time="2024-07-02T02:32:21.279511117Z" level=info msg="StartContainer for \"51c04da40129959fd7df64c2b26c8d7b486aef32a8a75fe26297cef6ad5baaa1\"" Jul 2 02:32:21.283271 kubelet[2517]: I0702 02:32:21.283246 2517 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-k2vxz" podStartSLOduration=3.137905183 podStartE2EDuration="13.283206667s" podCreationTimestamp="2024-07-02 02:32:08 +0000 UTC" firstStartedPulling="2024-07-02 02:32:10.81644771 +0000 UTC m=+17.058977880" lastFinishedPulling="2024-07-02 02:32:20.961749194 +0000 UTC m=+27.204279364" observedRunningTime="2024-07-02 02:32:21.258250652 +0000 UTC m=+27.500780822" watchObservedRunningTime="2024-07-02 02:32:21.283206667 +0000 UTC m=+27.525736837" Jul 2 02:32:21.308381 systemd[1]: 
run-containerd-runc-k8s.io-51c04da40129959fd7df64c2b26c8d7b486aef32a8a75fe26297cef6ad5baaa1-runc.TB3CU4.mount: Deactivated successfully. Jul 2 02:32:21.313076 systemd[1]: Started cri-containerd-51c04da40129959fd7df64c2b26c8d7b486aef32a8a75fe26297cef6ad5baaa1.scope. Jul 2 02:32:21.343997 systemd[1]: cri-containerd-51c04da40129959fd7df64c2b26c8d7b486aef32a8a75fe26297cef6ad5baaa1.scope: Deactivated successfully. Jul 2 02:32:21.348875 env[1446]: time="2024-07-02T02:32:21.348829454Z" level=info msg="StartContainer for \"51c04da40129959fd7df64c2b26c8d7b486aef32a8a75fe26297cef6ad5baaa1\" returns successfully" Jul 2 02:32:21.368734 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51c04da40129959fd7df64c2b26c8d7b486aef32a8a75fe26297cef6ad5baaa1-rootfs.mount: Deactivated successfully. Jul 2 02:32:21.720118 env[1446]: time="2024-07-02T02:32:21.720066718Z" level=info msg="shim disconnected" id=51c04da40129959fd7df64c2b26c8d7b486aef32a8a75fe26297cef6ad5baaa1 Jul 2 02:32:21.720118 env[1446]: time="2024-07-02T02:32:21.720113757Z" level=warning msg="cleaning up after shim disconnected" id=51c04da40129959fd7df64c2b26c8d7b486aef32a8a75fe26297cef6ad5baaa1 namespace=k8s.io Jul 2 02:32:21.720118 env[1446]: time="2024-07-02T02:32:21.720123197Z" level=info msg="cleaning up dead shim" Jul 2 02:32:21.726583 env[1446]: time="2024-07-02T02:32:21.726535661Z" level=warning msg="cleanup warnings time=\"2024-07-02T02:32:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3080 runtime=io.containerd.runc.v2\n" Jul 2 02:32:22.240377 env[1446]: time="2024-07-02T02:32:22.240326316Z" level=info msg="CreateContainer within sandbox \"54f9184d2cae0a10ba6dd5977e15d2bd69ea9ddf50993f59570e1b0d1619a1c5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 02:32:22.268239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount891069933.mount: Deactivated successfully. 
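The kubelet messages interleaved through this section use klog's header format: `I0702 02:32:21.283246 2517 pod_startup_latency_tracker.go:102]` encodes severity (I/W/E/F), month and day, wall time, PID, and the emitting source location. A sketch of a parser for that header (the regex and field names are my own, not from any kubelet library):

```python
import re

# klog header: <severity><MMDD> <HH:MM:SS.ffffff> <pid> <file>:<line>]
KLOG_RE = re.compile(
    r'(?P<sev>[IWEF])(?P<month>\d{2})(?P<day>\d{2})\s+'
    r'(?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+'
    r'(?P<pid>\d+)\s+(?P<file>[\w.]+):(?P<line>\d+)\]'
)

def parse_klog_header(entry: str) -> dict:
    """Extract the klog header fields from a log entry, or raise ValueError."""
    m = KLOG_RE.search(entry)
    if m is None:
        raise ValueError(f"no klog header in: {entry!r}")
    d = m.groupdict()
    d["pid"] = int(d["pid"])
    d["line"] = int(d["line"])
    return d

hdr = parse_klog_header(
    'I0702 02:32:21.283246 2517 pod_startup_latency_tracker.go:102] '
    '"Observed pod startup duration"'
)
# hdr["sev"] == "I", hdr["pid"] == 2517
```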
Jul 2 02:32:22.280804 env[1446]: time="2024-07-02T02:32:22.280759411Z" level=info msg="CreateContainer within sandbox \"54f9184d2cae0a10ba6dd5977e15d2bd69ea9ddf50993f59570e1b0d1619a1c5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"812c293232601a0568e53dad9775e42a08f621cbf91e1c3eba098f1ece0ec2fe\"" Jul 2 02:32:22.282156 env[1446]: time="2024-07-02T02:32:22.281342010Z" level=info msg="StartContainer for \"812c293232601a0568e53dad9775e42a08f621cbf91e1c3eba098f1ece0ec2fe\"" Jul 2 02:32:22.297324 systemd[1]: Started cri-containerd-812c293232601a0568e53dad9775e42a08f621cbf91e1c3eba098f1ece0ec2fe.scope. Jul 2 02:32:22.327962 systemd[1]: cri-containerd-812c293232601a0568e53dad9775e42a08f621cbf91e1c3eba098f1ece0ec2fe.scope: Deactivated successfully. Jul 2 02:32:22.329991 env[1446]: time="2024-07-02T02:32:22.329846724Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91ab30d0_fead_4cdd_9688_ace1474008e2.slice/cri-containerd-812c293232601a0568e53dad9775e42a08f621cbf91e1c3eba098f1ece0ec2fe.scope/memory.events\": no such file or directory" Jul 2 02:32:22.334655 env[1446]: time="2024-07-02T02:32:22.334611471Z" level=info msg="StartContainer for \"812c293232601a0568e53dad9775e42a08f621cbf91e1c3eba098f1ece0ec2fe\" returns successfully" Jul 2 02:32:22.349000 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-812c293232601a0568e53dad9775e42a08f621cbf91e1c3eba098f1ece0ec2fe-rootfs.mount: Deactivated successfully. 
Jul 2 02:32:22.360249 env[1446]: time="2024-07-02T02:32:22.360182965Z" level=info msg="shim disconnected" id=812c293232601a0568e53dad9775e42a08f621cbf91e1c3eba098f1ece0ec2fe Jul 2 02:32:22.360446 env[1446]: time="2024-07-02T02:32:22.360251005Z" level=warning msg="cleaning up after shim disconnected" id=812c293232601a0568e53dad9775e42a08f621cbf91e1c3eba098f1ece0ec2fe namespace=k8s.io Jul 2 02:32:22.360446 env[1446]: time="2024-07-02T02:32:22.360262525Z" level=info msg="cleaning up dead shim" Jul 2 02:32:22.367236 env[1446]: time="2024-07-02T02:32:22.367185907Z" level=warning msg="cleanup warnings time=\"2024-07-02T02:32:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3138 runtime=io.containerd.runc.v2\n" Jul 2 02:32:23.244917 env[1446]: time="2024-07-02T02:32:23.244877432Z" level=info msg="CreateContainer within sandbox \"54f9184d2cae0a10ba6dd5977e15d2bd69ea9ddf50993f59570e1b0d1619a1c5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 02:32:23.281241 env[1446]: time="2024-07-02T02:32:23.281186658Z" level=info msg="CreateContainer within sandbox \"54f9184d2cae0a10ba6dd5977e15d2bd69ea9ddf50993f59570e1b0d1619a1c5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c3f7e6da0fc8a19a3e8b2adcb6d33a230cd62f23a9b2484f47fe60cde5216e85\"" Jul 2 02:32:23.281944 env[1446]: time="2024-07-02T02:32:23.281924497Z" level=info msg="StartContainer for \"c3f7e6da0fc8a19a3e8b2adcb6d33a230cd62f23a9b2484f47fe60cde5216e85\"" Jul 2 02:32:23.302515 systemd[1]: Started cri-containerd-c3f7e6da0fc8a19a3e8b2adcb6d33a230cd62f23a9b2484f47fe60cde5216e85.scope. Jul 2 02:32:23.348745 env[1446]: time="2024-07-02T02:32:23.348687405Z" level=info msg="StartContainer for \"c3f7e6da0fc8a19a3e8b2adcb6d33a230cd62f23a9b2484f47fe60cde5216e85\" returns successfully" Jul 2 02:32:23.440337 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Jul 2 02:32:23.500345 kubelet[2517]: I0702 02:32:23.497512 2517 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 02:32:23.522354 kubelet[2517]: I0702 02:32:23.522293 2517 topology_manager.go:215] "Topology Admit Handler" podUID="6ffa4fc0-3b2b-4b30-bb48-bc8cf103e72b" podNamespace="kube-system" podName="coredns-76f75df574-jq995" Jul 2 02:32:23.523065 kubelet[2517]: I0702 02:32:23.523044 2517 topology_manager.go:215] "Topology Admit Handler" podUID="85ad86f3-f330-48cb-8663-d81b0bcd8016" podNamespace="kube-system" podName="coredns-76f75df574-4srbs" Jul 2 02:32:23.528205 systemd[1]: Created slice kubepods-burstable-pod6ffa4fc0_3b2b_4b30_bb48_bc8cf103e72b.slice. Jul 2 02:32:23.532361 systemd[1]: Created slice kubepods-burstable-pod85ad86f3_f330_48cb_8663_d81b0bcd8016.slice. Jul 2 02:32:23.702656 kubelet[2517]: I0702 02:32:23.702607 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ffa4fc0-3b2b-4b30-bb48-bc8cf103e72b-config-volume\") pod \"coredns-76f75df574-jq995\" (UID: \"6ffa4fc0-3b2b-4b30-bb48-bc8cf103e72b\") " pod="kube-system/coredns-76f75df574-jq995" Jul 2 02:32:23.702656 kubelet[2517]: I0702 02:32:23.702662 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcdhk\" (UniqueName: \"kubernetes.io/projected/6ffa4fc0-3b2b-4b30-bb48-bc8cf103e72b-kube-api-access-xcdhk\") pod \"coredns-76f75df574-jq995\" (UID: \"6ffa4fc0-3b2b-4b30-bb48-bc8cf103e72b\") " pod="kube-system/coredns-76f75df574-jq995" Jul 2 02:32:23.702819 kubelet[2517]: I0702 02:32:23.702690 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85ad86f3-f330-48cb-8663-d81b0bcd8016-config-volume\") pod \"coredns-76f75df574-4srbs\" (UID: \"85ad86f3-f330-48cb-8663-d81b0bcd8016\") " 
pod="kube-system/coredns-76f75df574-4srbs" Jul 2 02:32:23.702819 kubelet[2517]: I0702 02:32:23.702723 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs48s\" (UniqueName: \"kubernetes.io/projected/85ad86f3-f330-48cb-8663-d81b0bcd8016-kube-api-access-xs48s\") pod \"coredns-76f75df574-4srbs\" (UID: \"85ad86f3-f330-48cb-8663-d81b0bcd8016\") " pod="kube-system/coredns-76f75df574-4srbs" Jul 2 02:32:23.831262 env[1446]: time="2024-07-02T02:32:23.831165645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jq995,Uid:6ffa4fc0-3b2b-4b30-bb48-bc8cf103e72b,Namespace:kube-system,Attempt:0,}" Jul 2 02:32:23.836604 env[1446]: time="2024-07-02T02:32:23.836407551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4srbs,Uid:85ad86f3-f330-48cb-8663-d81b0bcd8016,Namespace:kube-system,Attempt:0,}" Jul 2 02:32:23.919339 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Jul 2 02:32:24.271976 systemd[1]: run-containerd-runc-k8s.io-c3f7e6da0fc8a19a3e8b2adcb6d33a230cd62f23a9b2484f47fe60cde5216e85-runc.MXfsqy.mount: Deactivated successfully. 
Jul 2 02:32:24.274109 kubelet[2517]: I0702 02:32:24.273480 2517 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-rkk6m" podStartSLOduration=8.220325611 podStartE2EDuration="15.273443156s" podCreationTimestamp="2024-07-02 02:32:09 +0000 UTC" firstStartedPulling="2024-07-02 02:32:10.373987215 +0000 UTC m=+16.616517345" lastFinishedPulling="2024-07-02 02:32:17.42710472 +0000 UTC m=+23.669634890" observedRunningTime="2024-07-02 02:32:24.273242876 +0000 UTC m=+30.515773046" watchObservedRunningTime="2024-07-02 02:32:24.273443156 +0000 UTC m=+30.515973326" Jul 2 02:32:25.568066 systemd-networkd[1606]: cilium_host: Link UP Jul 2 02:32:25.574773 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Jul 2 02:32:25.574881 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 2 02:32:25.577105 systemd-networkd[1606]: cilium_net: Link UP Jul 2 02:32:25.577225 systemd-networkd[1606]: cilium_net: Gained carrier Jul 2 02:32:25.577413 systemd-networkd[1606]: cilium_host: Gained carrier Jul 2 02:32:25.782666 systemd-networkd[1606]: cilium_vxlan: Link UP Jul 2 02:32:25.782673 systemd-networkd[1606]: cilium_vxlan: Gained carrier Jul 2 02:32:25.978466 systemd-networkd[1606]: cilium_net: Gained IPv6LL Jul 2 02:32:26.096338 kernel: NET: Registered PF_ALG protocol family Jul 2 02:32:26.114454 systemd-networkd[1606]: cilium_host: Gained IPv6LL Jul 2 02:32:26.943460 systemd-networkd[1606]: lxc_health: Link UP Jul 2 02:32:26.959570 systemd-networkd[1606]: lxc_health: Gained carrier Jul 2 02:32:26.961237 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 02:32:27.414797 systemd-networkd[1606]: lxc860d99e7ac97: Link UP Jul 2 02:32:27.415846 systemd-networkd[1606]: lxc32048e696d46: Link UP Jul 2 02:32:27.423334 kernel: eth0: renamed from tmpc6110 Jul 2 02:32:27.431338 kernel: eth0: renamed from tmp21d7f Jul 2 02:32:27.443657 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc32048e696d46: link 
becomes ready Jul 2 02:32:27.443502 systemd-networkd[1606]: lxc32048e696d46: Gained carrier Jul 2 02:32:27.455413 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc860d99e7ac97: link becomes ready Jul 2 02:32:27.455145 systemd-networkd[1606]: lxc860d99e7ac97: Gained carrier Jul 2 02:32:27.522493 systemd-networkd[1606]: cilium_vxlan: Gained IPv6LL Jul 2 02:32:28.546420 systemd-networkd[1606]: lxc_health: Gained IPv6LL Jul 2 02:32:28.546676 systemd-networkd[1606]: lxc860d99e7ac97: Gained IPv6LL Jul 2 02:32:29.315418 systemd-networkd[1606]: lxc32048e696d46: Gained IPv6LL Jul 2 02:32:30.978406 env[1446]: time="2024-07-02T02:32:30.976245430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 02:32:30.978406 env[1446]: time="2024-07-02T02:32:30.976297710Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 02:32:30.978406 env[1446]: time="2024-07-02T02:32:30.976353710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 02:32:30.978406 env[1446]: time="2024-07-02T02:32:30.976463630Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/21d7fade8d5664553d6e754d07ef9b7bdf6fc7c99bf31dae01bbaa955c7cdcc0 pid=3695 runtime=io.containerd.runc.v2 Jul 2 02:32:30.995200 env[1446]: time="2024-07-02T02:32:30.994531427Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 02:32:30.995200 env[1446]: time="2024-07-02T02:32:30.994585946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 02:32:30.995200 env[1446]: time="2024-07-02T02:32:30.994596466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 02:32:30.995634 env[1446]: time="2024-07-02T02:32:30.995498944Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c611024bcd200b880de7f7035417d83b54d5ef3d2e0fbe04cdf073d4a416cb3e pid=3717 runtime=io.containerd.runc.v2 Jul 2 02:32:31.002278 systemd[1]: run-containerd-runc-k8s.io-21d7fade8d5664553d6e754d07ef9b7bdf6fc7c99bf31dae01bbaa955c7cdcc0-runc.UZe7G0.mount: Deactivated successfully. Jul 2 02:32:31.014702 systemd[1]: Started cri-containerd-21d7fade8d5664553d6e754d07ef9b7bdf6fc7c99bf31dae01bbaa955c7cdcc0.scope. Jul 2 02:32:31.027716 systemd[1]: Started cri-containerd-c611024bcd200b880de7f7035417d83b54d5ef3d2e0fbe04cdf073d4a416cb3e.scope. Jul 2 02:32:31.070541 env[1446]: time="2024-07-02T02:32:31.070488967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jq995,Uid:6ffa4fc0-3b2b-4b30-bb48-bc8cf103e72b,Namespace:kube-system,Attempt:0,} returns sandbox id \"21d7fade8d5664553d6e754d07ef9b7bdf6fc7c99bf31dae01bbaa955c7cdcc0\"" Jul 2 02:32:31.073663 env[1446]: time="2024-07-02T02:32:31.073628759Z" level=info msg="CreateContainer within sandbox \"21d7fade8d5664553d6e754d07ef9b7bdf6fc7c99bf31dae01bbaa955c7cdcc0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 02:32:31.103004 env[1446]: time="2024-07-02T02:32:31.102954210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4srbs,Uid:85ad86f3-f330-48cb-8663-d81b0bcd8016,Namespace:kube-system,Attempt:0,} returns sandbox id \"c611024bcd200b880de7f7035417d83b54d5ef3d2e0fbe04cdf073d4a416cb3e\"" Jul 2 02:32:31.105869 env[1446]: time="2024-07-02T02:32:31.105836883Z" level=info msg="CreateContainer within sandbox 
\"c611024bcd200b880de7f7035417d83b54d5ef3d2e0fbe04cdf073d4a416cb3e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 02:32:31.112554 env[1446]: time="2024-07-02T02:32:31.112512187Z" level=info msg="CreateContainer within sandbox \"21d7fade8d5664553d6e754d07ef9b7bdf6fc7c99bf31dae01bbaa955c7cdcc0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"973ba520a9d86872f55a557d88aa580affa3592dad0b7c724dd54110538f119e\"" Jul 2 02:32:31.115912 env[1446]: time="2024-07-02T02:32:31.115878859Z" level=info msg="StartContainer for \"973ba520a9d86872f55a557d88aa580affa3592dad0b7c724dd54110538f119e\"" Jul 2 02:32:31.136046 systemd[1]: Started cri-containerd-973ba520a9d86872f55a557d88aa580affa3592dad0b7c724dd54110538f119e.scope. Jul 2 02:32:31.144087 env[1446]: time="2024-07-02T02:32:31.144036233Z" level=info msg="CreateContainer within sandbox \"c611024bcd200b880de7f7035417d83b54d5ef3d2e0fbe04cdf073d4a416cb3e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4dd9f4f7ee27b1a1759cc7e0d329633ec1210492346d89ca4e042ceab9f140be\"" Jul 2 02:32:31.144633 env[1446]: time="2024-07-02T02:32:31.144596232Z" level=info msg="StartContainer for \"4dd9f4f7ee27b1a1759cc7e0d329633ec1210492346d89ca4e042ceab9f140be\"" Jul 2 02:32:31.168202 systemd[1]: Started cri-containerd-4dd9f4f7ee27b1a1759cc7e0d329633ec1210492346d89ca4e042ceab9f140be.scope. 
Jul 2 02:32:31.199473 env[1446]: time="2024-07-02T02:32:31.199414222Z" level=info msg="StartContainer for \"973ba520a9d86872f55a557d88aa580affa3592dad0b7c724dd54110538f119e\" returns successfully" Jul 2 02:32:31.220578 env[1446]: time="2024-07-02T02:32:31.220521972Z" level=info msg="StartContainer for \"4dd9f4f7ee27b1a1759cc7e0d329633ec1210492346d89ca4e042ceab9f140be\" returns successfully" Jul 2 02:32:31.299798 kubelet[2517]: I0702 02:32:31.299694 2517 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-jq995" podStartSLOduration=23.299652145 podStartE2EDuration="23.299652145s" podCreationTimestamp="2024-07-02 02:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 02:32:31.297885429 +0000 UTC m=+37.540415599" watchObservedRunningTime="2024-07-02 02:32:31.299652145 +0000 UTC m=+37.542182315" Jul 2 02:32:31.299798 kubelet[2517]: I0702 02:32:31.299774 2517 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-4srbs" podStartSLOduration=23.299757865 podStartE2EDuration="23.299757865s" podCreationTimestamp="2024-07-02 02:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 02:32:31.283578663 +0000 UTC m=+37.526108833" watchObservedRunningTime="2024-07-02 02:32:31.299757865 +0000 UTC m=+37.542288035" Jul 2 02:34:23.559082 systemd[1]: Started sshd@5-10.200.20.11:22-10.200.16.10:46774.service. Jul 2 02:34:24.003796 sshd[3869]: Accepted publickey for core from 10.200.16.10 port 46774 ssh2: RSA SHA256:I/YdyjcuZ5Fsu3YYuMkl0R6I+4bywGkiBXqUH7a1KBg Jul 2 02:34:24.005556 sshd[3869]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 02:34:24.010424 systemd[1]: Started session-8.scope. 
Jul 2 02:34:24.011368 systemd-logind[1435]: New session 8 of user core. Jul 2 02:34:24.470556 sshd[3869]: pam_unix(sshd:session): session closed for user core Jul 2 02:34:24.473465 systemd-logind[1435]: Session 8 logged out. Waiting for processes to exit. Jul 2 02:34:24.473629 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 02:34:24.474406 systemd[1]: sshd@5-10.200.20.11:22-10.200.16.10:46774.service: Deactivated successfully. Jul 2 02:34:24.475647 systemd-logind[1435]: Removed session 8. Jul 2 02:34:29.551177 systemd[1]: Started sshd@6-10.200.20.11:22-10.200.16.10:49756.service. Jul 2 02:34:29.995980 sshd[3882]: Accepted publickey for core from 10.200.16.10 port 49756 ssh2: RSA SHA256:I/YdyjcuZ5Fsu3YYuMkl0R6I+4bywGkiBXqUH7a1KBg Jul 2 02:34:29.997459 sshd[3882]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 02:34:30.001759 systemd[1]: Started session-9.scope. Jul 2 02:34:30.002059 systemd-logind[1435]: New session 9 of user core. Jul 2 02:34:30.387010 sshd[3882]: pam_unix(sshd:session): session closed for user core Jul 2 02:34:30.390037 systemd[1]: sshd@6-10.200.20.11:22-10.200.16.10:49756.service: Deactivated successfully. Jul 2 02:34:30.390745 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 02:34:30.391145 systemd-logind[1435]: Session 9 logged out. Waiting for processes to exit. Jul 2 02:34:30.391846 systemd-logind[1435]: Removed session 9. Jul 2 02:34:35.462148 systemd[1]: Started sshd@7-10.200.20.11:22-10.200.16.10:49772.service. Jul 2 02:34:35.907494 sshd[3895]: Accepted publickey for core from 10.200.16.10 port 49772 ssh2: RSA SHA256:I/YdyjcuZ5Fsu3YYuMkl0R6I+4bywGkiBXqUH7a1KBg Jul 2 02:34:35.909222 sshd[3895]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 02:34:35.913632 systemd[1]: Started session-10.scope. Jul 2 02:34:35.913947 systemd-logind[1435]: New session 10 of user core. 
Jul 2 02:34:36.289348 sshd[3895]: pam_unix(sshd:session): session closed for user core Jul 2 02:34:36.292270 systemd[1]: sshd@7-10.200.20.11:22-10.200.16.10:49772.service: Deactivated successfully. Jul 2 02:34:36.293009 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 02:34:36.294180 systemd-logind[1435]: Session 10 logged out. Waiting for processes to exit. Jul 2 02:34:36.295025 systemd-logind[1435]: Removed session 10. Jul 2 02:34:41.359380 systemd[1]: Started sshd@8-10.200.20.11:22-10.200.16.10:60816.service. Jul 2 02:34:41.778072 sshd[3911]: Accepted publickey for core from 10.200.16.10 port 60816 ssh2: RSA SHA256:I/YdyjcuZ5Fsu3YYuMkl0R6I+4bywGkiBXqUH7a1KBg Jul 2 02:34:41.779786 sshd[3911]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 02:34:41.784387 systemd[1]: Started session-11.scope. Jul 2 02:34:41.784893 systemd-logind[1435]: New session 11 of user core. Jul 2 02:34:42.148249 sshd[3911]: pam_unix(sshd:session): session closed for user core Jul 2 02:34:42.151438 systemd[1]: sshd@8-10.200.20.11:22-10.200.16.10:60816.service: Deactivated successfully. Jul 2 02:34:42.152140 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 02:34:42.153257 systemd-logind[1435]: Session 11 logged out. Waiting for processes to exit. Jul 2 02:34:42.154018 systemd-logind[1435]: Removed session 11. Jul 2 02:34:47.217852 systemd[1]: Started sshd@9-10.200.20.11:22-10.200.16.10:60830.service. Jul 2 02:34:47.630932 sshd[3923]: Accepted publickey for core from 10.200.16.10 port 60830 ssh2: RSA SHA256:I/YdyjcuZ5Fsu3YYuMkl0R6I+4bywGkiBXqUH7a1KBg Jul 2 02:34:47.632702 sshd[3923]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 02:34:47.636929 systemd[1]: Started session-12.scope. Jul 2 02:34:47.638163 systemd-logind[1435]: New session 12 of user core. 
Jul 2 02:34:47.986597 sshd[3923]: pam_unix(sshd:session): session closed for user core Jul 2 02:34:47.989810 systemd[1]: sshd@9-10.200.20.11:22-10.200.16.10:60830.service: Deactivated successfully. Jul 2 02:34:47.990571 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 02:34:47.991141 systemd-logind[1435]: Session 12 logged out. Waiting for processes to exit. Jul 2 02:34:47.991934 systemd-logind[1435]: Removed session 12. Jul 2 02:34:48.057902 systemd[1]: Started sshd@10-10.200.20.11:22-10.200.16.10:60844.service. Jul 2 02:34:48.476299 sshd[3936]: Accepted publickey for core from 10.200.16.10 port 60844 ssh2: RSA SHA256:I/YdyjcuZ5Fsu3YYuMkl0R6I+4bywGkiBXqUH7a1KBg Jul 2 02:34:48.477936 sshd[3936]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 02:34:48.482222 systemd[1]: Started session-13.scope. Jul 2 02:34:48.482701 systemd-logind[1435]: New session 13 of user core. Jul 2 02:34:48.872698 sshd[3936]: pam_unix(sshd:session): session closed for user core Jul 2 02:34:48.875583 systemd[1]: sshd@10-10.200.20.11:22-10.200.16.10:60844.service: Deactivated successfully. Jul 2 02:34:48.876304 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 02:34:48.876961 systemd-logind[1435]: Session 13 logged out. Waiting for processes to exit. Jul 2 02:34:48.877744 systemd-logind[1435]: Removed session 13. Jul 2 02:34:48.945852 systemd[1]: Started sshd@11-10.200.20.11:22-10.200.16.10:44926.service. Jul 2 02:34:49.390433 sshd[3945]: Accepted publickey for core from 10.200.16.10 port 44926 ssh2: RSA SHA256:I/YdyjcuZ5Fsu3YYuMkl0R6I+4bywGkiBXqUH7a1KBg Jul 2 02:34:49.393145 sshd[3945]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 02:34:49.396875 systemd-logind[1435]: New session 14 of user core. Jul 2 02:34:49.397545 systemd[1]: Started session-14.scope. 
Jul 2 02:34:49.770608 sshd[3945]: pam_unix(sshd:session): session closed for user core Jul 2 02:34:49.773107 systemd-logind[1435]: Session 14 logged out. Waiting for processes to exit. Jul 2 02:34:49.773397 systemd[1]: sshd@11-10.200.20.11:22-10.200.16.10:44926.service: Deactivated successfully. Jul 2 02:34:49.774104 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 02:34:49.774815 systemd-logind[1435]: Removed session 14. Jul 2 02:34:54.846013 systemd[1]: Started sshd@12-10.200.20.11:22-10.200.16.10:44938.service. Jul 2 02:34:55.300393 sshd[3960]: Accepted publickey for core from 10.200.16.10 port 44938 ssh2: RSA SHA256:I/YdyjcuZ5Fsu3YYuMkl0R6I+4bywGkiBXqUH7a1KBg Jul 2 02:34:55.301786 sshd[3960]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 02:34:55.307005 systemd[1]: Started session-15.scope. Jul 2 02:34:55.308433 systemd-logind[1435]: New session 15 of user core. Jul 2 02:34:55.698183 sshd[3960]: pam_unix(sshd:session): session closed for user core Jul 2 02:34:55.700733 systemd-logind[1435]: Session 15 logged out. Waiting for processes to exit. Jul 2 02:34:55.701408 systemd[1]: sshd@12-10.200.20.11:22-10.200.16.10:44938.service: Deactivated successfully. Jul 2 02:34:55.702173 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 02:34:55.703436 systemd-logind[1435]: Removed session 15. Jul 2 02:35:00.768242 systemd[1]: Started sshd@13-10.200.20.11:22-10.200.16.10:58326.service. Jul 2 02:35:01.181727 sshd[3975]: Accepted publickey for core from 10.200.16.10 port 58326 ssh2: RSA SHA256:I/YdyjcuZ5Fsu3YYuMkl0R6I+4bywGkiBXqUH7a1KBg Jul 2 02:35:01.183478 sshd[3975]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 02:35:01.188084 systemd[1]: Started session-16.scope. Jul 2 02:35:01.189362 systemd-logind[1435]: New session 16 of user core. 
Jul 2 02:35:01.548981 sshd[3975]: pam_unix(sshd:session): session closed for user core Jul 2 02:35:01.552064 systemd[1]: sshd@13-10.200.20.11:22-10.200.16.10:58326.service: Deactivated successfully. Jul 2 02:35:01.552823 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 02:35:01.553404 systemd-logind[1435]: Session 16 logged out. Waiting for processes to exit. Jul 2 02:35:01.554299 systemd-logind[1435]: Removed session 16. Jul 2 02:35:01.622951 systemd[1]: Started sshd@14-10.200.20.11:22-10.200.16.10:58332.service. Jul 2 02:35:02.068479 sshd[3987]: Accepted publickey for core from 10.200.16.10 port 58332 ssh2: RSA SHA256:I/YdyjcuZ5Fsu3YYuMkl0R6I+4bywGkiBXqUH7a1KBg Jul 2 02:35:02.070217 sshd[3987]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 02:35:02.074601 systemd[1]: Started session-17.scope. Jul 2 02:35:02.075301 systemd-logind[1435]: New session 17 of user core. Jul 2 02:35:02.486504 sshd[3987]: pam_unix(sshd:session): session closed for user core Jul 2 02:35:02.489156 systemd-logind[1435]: Session 17 logged out. Waiting for processes to exit. Jul 2 02:35:02.489353 systemd[1]: sshd@14-10.200.20.11:22-10.200.16.10:58332.service: Deactivated successfully. Jul 2 02:35:02.490041 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 02:35:02.490734 systemd-logind[1435]: Removed session 17. Jul 2 02:35:02.555134 systemd[1]: Started sshd@15-10.200.20.11:22-10.200.16.10:58336.service. Jul 2 02:35:02.969505 sshd[3996]: Accepted publickey for core from 10.200.16.10 port 58336 ssh2: RSA SHA256:I/YdyjcuZ5Fsu3YYuMkl0R6I+4bywGkiBXqUH7a1KBg Jul 2 02:35:02.971188 sshd[3996]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 02:35:02.975492 systemd[1]: Started session-18.scope. Jul 2 02:35:02.975656 systemd-logind[1435]: New session 18 of user core. 
Jul 2 02:35:04.657814 sshd[3996]: pam_unix(sshd:session): session closed for user core Jul 2 02:35:04.660913 systemd[1]: sshd@15-10.200.20.11:22-10.200.16.10:58336.service: Deactivated successfully. Jul 2 02:35:04.661666 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 02:35:04.662183 systemd-logind[1435]: Session 18 logged out. Waiting for processes to exit. Jul 2 02:35:04.663039 systemd-logind[1435]: Removed session 18. Jul 2 02:35:04.726653 systemd[1]: Started sshd@16-10.200.20.11:22-10.200.16.10:58338.service. Jul 2 02:35:05.140364 sshd[4013]: Accepted publickey for core from 10.200.16.10 port 58338 ssh2: RSA SHA256:I/YdyjcuZ5Fsu3YYuMkl0R6I+4bywGkiBXqUH7a1KBg Jul 2 02:35:05.141930 sshd[4013]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 02:35:05.146613 systemd[1]: Started session-19.scope. Jul 2 02:35:05.147630 systemd-logind[1435]: New session 19 of user core. Jul 2 02:35:05.599041 sshd[4013]: pam_unix(sshd:session): session closed for user core Jul 2 02:35:05.601810 systemd[1]: sshd@16-10.200.20.11:22-10.200.16.10:58338.service: Deactivated successfully. Jul 2 02:35:05.602564 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 02:35:05.603081 systemd-logind[1435]: Session 19 logged out. Waiting for processes to exit. Jul 2 02:35:05.603800 systemd-logind[1435]: Removed session 19. Jul 2 02:35:05.672792 systemd[1]: Started sshd@17-10.200.20.11:22-10.200.16.10:58354.service. Jul 2 02:35:06.118084 sshd[4023]: Accepted publickey for core from 10.200.16.10 port 58354 ssh2: RSA SHA256:I/YdyjcuZ5Fsu3YYuMkl0R6I+4bywGkiBXqUH7a1KBg Jul 2 02:35:06.119442 sshd[4023]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 02:35:06.123913 systemd[1]: Started session-20.scope. Jul 2 02:35:06.124227 systemd-logind[1435]: New session 20 of user core. 
Jul 2 02:35:06.494171 sshd[4023]: pam_unix(sshd:session): session closed for user core Jul 2 02:35:06.497053 systemd[1]: sshd@17-10.200.20.11:22-10.200.16.10:58354.service: Deactivated successfully. Jul 2 02:35:06.497795 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 02:35:06.498348 systemd-logind[1435]: Session 20 logged out. Waiting for processes to exit. Jul 2 02:35:06.499142 systemd-logind[1435]: Removed session 20. Jul 2 02:35:11.569091 systemd[1]: Started sshd@18-10.200.20.11:22-10.200.16.10:44280.service. Jul 2 02:35:12.017584 sshd[4036]: Accepted publickey for core from 10.200.16.10 port 44280 ssh2: RSA SHA256:I/YdyjcuZ5Fsu3YYuMkl0R6I+4bywGkiBXqUH7a1KBg Jul 2 02:35:12.019370 sshd[4036]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 02:35:12.023672 systemd[1]: Started session-21.scope. Jul 2 02:35:12.023970 systemd-logind[1435]: New session 21 of user core. Jul 2 02:35:12.405938 sshd[4036]: pam_unix(sshd:session): session closed for user core Jul 2 02:35:12.408331 systemd[1]: sshd@18-10.200.20.11:22-10.200.16.10:44280.service: Deactivated successfully. Jul 2 02:35:12.409040 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 02:35:12.409565 systemd-logind[1435]: Session 21 logged out. Waiting for processes to exit. Jul 2 02:35:12.410227 systemd-logind[1435]: Removed session 21. Jul 2 02:35:17.481277 systemd[1]: Started sshd@19-10.200.20.11:22-10.200.16.10:44282.service. Jul 2 02:35:17.929023 sshd[4050]: Accepted publickey for core from 10.200.16.10 port 44282 ssh2: RSA SHA256:I/YdyjcuZ5Fsu3YYuMkl0R6I+4bywGkiBXqUH7a1KBg Jul 2 02:35:17.930371 sshd[4050]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 02:35:17.934065 systemd-logind[1435]: New session 22 of user core. Jul 2 02:35:17.934767 systemd[1]: Started session-22.scope. 
Jul 2 02:35:18.316187 sshd[4050]: pam_unix(sshd:session): session closed for user core Jul 2 02:35:18.318842 systemd-logind[1435]: Session 22 logged out. Waiting for processes to exit. Jul 2 02:35:18.318845 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 02:35:18.319511 systemd[1]: sshd@19-10.200.20.11:22-10.200.16.10:44282.service: Deactivated successfully. Jul 2 02:35:18.320617 systemd-logind[1435]: Removed session 22. Jul 2 02:35:23.386224 systemd[1]: Started sshd@20-10.200.20.11:22-10.200.16.10:57476.service. Jul 2 02:35:23.805191 sshd[4066]: Accepted publickey for core from 10.200.16.10 port 57476 ssh2: RSA SHA256:I/YdyjcuZ5Fsu3YYuMkl0R6I+4bywGkiBXqUH7a1KBg Jul 2 02:35:23.806919 sshd[4066]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 02:35:23.811249 systemd[1]: Started session-23.scope. Jul 2 02:35:23.811403 systemd-logind[1435]: New session 23 of user core. Jul 2 02:35:24.167547 sshd[4066]: pam_unix(sshd:session): session closed for user core Jul 2 02:35:24.170010 systemd[1]: sshd@20-10.200.20.11:22-10.200.16.10:57476.service: Deactivated successfully. Jul 2 02:35:24.170748 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 02:35:24.172136 systemd-logind[1435]: Session 23 logged out. Waiting for processes to exit. Jul 2 02:35:24.173134 systemd-logind[1435]: Removed session 23. Jul 2 02:35:29.240543 systemd[1]: Started sshd@21-10.200.20.11:22-10.200.16.10:40134.service. Jul 2 02:35:29.685259 sshd[4078]: Accepted publickey for core from 10.200.16.10 port 40134 ssh2: RSA SHA256:I/YdyjcuZ5Fsu3YYuMkl0R6I+4bywGkiBXqUH7a1KBg Jul 2 02:35:29.687052 sshd[4078]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 02:35:29.691566 systemd[1]: Started session-24.scope. Jul 2 02:35:29.692882 systemd-logind[1435]: New session 24 of user core. 
Jul 2 02:35:30.064429 sshd[4078]: pam_unix(sshd:session): session closed for user core Jul 2 02:35:30.067489 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 02:35:30.068186 systemd-logind[1435]: Session 24 logged out. Waiting for processes to exit. Jul 2 02:35:30.068344 systemd[1]: sshd@21-10.200.20.11:22-10.200.16.10:40134.service: Deactivated successfully. Jul 2 02:35:30.069526 systemd-logind[1435]: Removed session 24. Jul 2 02:35:30.145636 systemd[1]: Started sshd@22-10.200.20.11:22-10.200.16.10:40142.service. Jul 2 02:35:30.592810 sshd[4090]: Accepted publickey for core from 10.200.16.10 port 40142 ssh2: RSA SHA256:I/YdyjcuZ5Fsu3YYuMkl0R6I+4bywGkiBXqUH7a1KBg Jul 2 02:35:30.594549 sshd[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 02:35:30.598641 systemd-logind[1435]: New session 25 of user core. Jul 2 02:35:30.599154 systemd[1]: Started session-25.scope. Jul 2 02:35:32.388013 systemd[1]: run-containerd-runc-k8s.io-c3f7e6da0fc8a19a3e8b2adcb6d33a230cd62f23a9b2484f47fe60cde5216e85-runc.atvwrv.mount: Deactivated successfully. Jul 2 02:35:32.397650 env[1446]: time="2024-07-02T02:35:32.397608910Z" level=info msg="StopContainer for \"66c61961a61957de853ba5955425136736a2e0e66d30d860526f32028cf37387\" with timeout 30 (s)" Jul 2 02:35:32.398874 env[1446]: time="2024-07-02T02:35:32.398846945Z" level=info msg="Stop container \"66c61961a61957de853ba5955425136736a2e0e66d30d860526f32028cf37387\" with signal terminated" Jul 2 02:35:32.411303 systemd[1]: cri-containerd-66c61961a61957de853ba5955425136736a2e0e66d30d860526f32028cf37387.scope: Deactivated successfully. 
Jul 2 02:35:32.422359 env[1446]: time="2024-07-02T02:35:32.421675904Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 02:35:32.429226 env[1446]: time="2024-07-02T02:35:32.429192674Z" level=info msg="StopContainer for \"c3f7e6da0fc8a19a3e8b2adcb6d33a230cd62f23a9b2484f47fe60cde5216e85\" with timeout 2 (s)" Jul 2 02:35:32.429764 env[1446]: time="2024-07-02T02:35:32.429731689Z" level=info msg="Stop container \"c3f7e6da0fc8a19a3e8b2adcb6d33a230cd62f23a9b2484f47fe60cde5216e85\" with signal terminated" Jul 2 02:35:32.439480 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66c61961a61957de853ba5955425136736a2e0e66d30d860526f32028cf37387-rootfs.mount: Deactivated successfully. Jul 2 02:35:32.448839 systemd-networkd[1606]: lxc_health: Link DOWN Jul 2 02:35:32.448846 systemd-networkd[1606]: lxc_health: Lost carrier Jul 2 02:35:32.467220 systemd[1]: cri-containerd-c3f7e6da0fc8a19a3e8b2adcb6d33a230cd62f23a9b2484f47fe60cde5216e85.scope: Deactivated successfully. Jul 2 02:35:32.467573 systemd[1]: cri-containerd-c3f7e6da0fc8a19a3e8b2adcb6d33a230cd62f23a9b2484f47fe60cde5216e85.scope: Consumed 6.306s CPU time. Jul 2 02:35:32.485558 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3f7e6da0fc8a19a3e8b2adcb6d33a230cd62f23a9b2484f47fe60cde5216e85-rootfs.mount: Deactivated successfully. 
Jul 2 02:35:32.497283 env[1446]: time="2024-07-02T02:35:32.497238738Z" level=info msg="shim disconnected" id=66c61961a61957de853ba5955425136736a2e0e66d30d860526f32028cf37387 Jul 2 02:35:32.497861 env[1446]: time="2024-07-02T02:35:32.497480265Z" level=warning msg="cleaning up after shim disconnected" id=66c61961a61957de853ba5955425136736a2e0e66d30d860526f32028cf37387 namespace=k8s.io Jul 2 02:35:32.497861 env[1446]: time="2024-07-02T02:35:32.497496385Z" level=info msg="cleaning up dead shim" Jul 2 02:35:32.498292 env[1446]: time="2024-07-02T02:35:32.498247806Z" level=info msg="shim disconnected" id=c3f7e6da0fc8a19a3e8b2adcb6d33a230cd62f23a9b2484f47fe60cde5216e85 Jul 2 02:35:32.498354 env[1446]: time="2024-07-02T02:35:32.498296608Z" level=warning msg="cleaning up after shim disconnected" id=c3f7e6da0fc8a19a3e8b2adcb6d33a230cd62f23a9b2484f47fe60cde5216e85 namespace=k8s.io Jul 2 02:35:32.498354 env[1446]: time="2024-07-02T02:35:32.498305168Z" level=info msg="cleaning up dead shim" Jul 2 02:35:32.506954 env[1446]: time="2024-07-02T02:35:32.506912169Z" level=warning msg="cleanup warnings time=\"2024-07-02T02:35:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4159 runtime=io.containerd.runc.v2\n" Jul 2 02:35:32.509848 env[1446]: time="2024-07-02T02:35:32.509811890Z" level=warning msg="cleanup warnings time=\"2024-07-02T02:35:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4158 runtime=io.containerd.runc.v2\n" Jul 2 02:35:32.511062 env[1446]: time="2024-07-02T02:35:32.511032964Z" level=info msg="StopContainer for \"c3f7e6da0fc8a19a3e8b2adcb6d33a230cd62f23a9b2484f47fe60cde5216e85\" returns successfully" Jul 2 02:35:32.511876 env[1446]: time="2024-07-02T02:35:32.511851787Z" level=info msg="StopPodSandbox for \"54f9184d2cae0a10ba6dd5977e15d2bd69ea9ddf50993f59570e1b0d1619a1c5\"" Jul 2 02:35:32.512029 env[1446]: time="2024-07-02T02:35:32.512008551Z" level=info msg="Container to stop 
\"613a66718c8d00ea214a9eddb209d745b0cf701de6dc04a3c3b771b34ed6d3fa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 02:35:32.512093 env[1446]: time="2024-07-02T02:35:32.512078033Z" level=info msg="Container to stop \"c3f7e6da0fc8a19a3e8b2adcb6d33a230cd62f23a9b2484f47fe60cde5216e85\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 02:35:32.512150 env[1446]: time="2024-07-02T02:35:32.512135795Z" level=info msg="Container to stop \"51c04da40129959fd7df64c2b26c8d7b486aef32a8a75fe26297cef6ad5baaa1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 02:35:32.512210 env[1446]: time="2024-07-02T02:35:32.512194757Z" level=info msg="Container to stop \"812c293232601a0568e53dad9775e42a08f621cbf91e1c3eba098f1ece0ec2fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 02:35:32.512268 env[1446]: time="2024-07-02T02:35:32.512252158Z" level=info msg="Container to stop \"0d64a3a99146f0aeee9aba20fece7544e6b14c8b484c57f8e0ef641484361405\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 02:35:32.517052 systemd[1]: cri-containerd-54f9184d2cae0a10ba6dd5977e15d2bd69ea9ddf50993f59570e1b0d1619a1c5.scope: Deactivated successfully. 
Jul 2 02:35:32.522956 env[1446]: time="2024-07-02T02:35:32.522913856Z" level=info msg="StopContainer for \"66c61961a61957de853ba5955425136736a2e0e66d30d860526f32028cf37387\" returns successfully" Jul 2 02:35:32.523607 env[1446]: time="2024-07-02T02:35:32.523584035Z" level=info msg="StopPodSandbox for \"d9c721c062bb7cd81ef1e44c51615958bd05b327311b90454225c4c40ee31c27\"" Jul 2 02:35:32.523848 env[1446]: time="2024-07-02T02:35:32.523827082Z" level=info msg="Container to stop \"66c61961a61957de853ba5955425136736a2e0e66d30d860526f32028cf37387\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 02:35:32.535674 systemd[1]: cri-containerd-d9c721c062bb7cd81ef1e44c51615958bd05b327311b90454225c4c40ee31c27.scope: Deactivated successfully. Jul 2 02:35:32.557287 env[1446]: time="2024-07-02T02:35:32.557233977Z" level=info msg="shim disconnected" id=54f9184d2cae0a10ba6dd5977e15d2bd69ea9ddf50993f59570e1b0d1619a1c5 Jul 2 02:35:32.558076 env[1446]: time="2024-07-02T02:35:32.558050400Z" level=warning msg="cleaning up after shim disconnected" id=54f9184d2cae0a10ba6dd5977e15d2bd69ea9ddf50993f59570e1b0d1619a1c5 namespace=k8s.io Jul 2 02:35:32.558167 env[1446]: time="2024-07-02T02:35:32.558153123Z" level=info msg="cleaning up dead shim" Jul 2 02:35:32.568091 env[1446]: time="2024-07-02T02:35:32.568049039Z" level=warning msg="cleanup warnings time=\"2024-07-02T02:35:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4219 runtime=io.containerd.runc.v2\n" Jul 2 02:35:32.569247 env[1446]: time="2024-07-02T02:35:32.569219552Z" level=info msg="TearDown network for sandbox \"54f9184d2cae0a10ba6dd5977e15d2bd69ea9ddf50993f59570e1b0d1619a1c5\" successfully" Jul 2 02:35:32.569482 env[1446]: time="2024-07-02T02:35:32.569461999Z" level=info msg="StopPodSandbox for \"54f9184d2cae0a10ba6dd5977e15d2bd69ea9ddf50993f59570e1b0d1619a1c5\" returns successfully" Jul 2 02:35:32.569653 env[1446]: time="2024-07-02T02:35:32.569447799Z" level=info msg="shim disconnected" 
id=d9c721c062bb7cd81ef1e44c51615958bd05b327311b90454225c4c40ee31c27 Jul 2 02:35:32.569780 env[1446]: time="2024-07-02T02:35:32.569753687Z" level=warning msg="cleaning up after shim disconnected" id=d9c721c062bb7cd81ef1e44c51615958bd05b327311b90454225c4c40ee31c27 namespace=k8s.io Jul 2 02:35:32.569889 env[1446]: time="2024-07-02T02:35:32.569875011Z" level=info msg="cleaning up dead shim" Jul 2 02:35:32.585609 env[1446]: time="2024-07-02T02:35:32.585566770Z" level=warning msg="cleanup warnings time=\"2024-07-02T02:35:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4231 runtime=io.containerd.runc.v2\n" Jul 2 02:35:32.586088 env[1446]: time="2024-07-02T02:35:32.586062384Z" level=info msg="TearDown network for sandbox \"d9c721c062bb7cd81ef1e44c51615958bd05b327311b90454225c4c40ee31c27\" successfully" Jul 2 02:35:32.586198 env[1446]: time="2024-07-02T02:35:32.586180507Z" level=info msg="StopPodSandbox for \"d9c721c062bb7cd81ef1e44c51615958bd05b327311b90454225c4c40ee31c27\" returns successfully" Jul 2 02:35:32.605377 kubelet[2517]: I0702 02:35:32.604281 2517 scope.go:117] "RemoveContainer" containerID="c3f7e6da0fc8a19a3e8b2adcb6d33a230cd62f23a9b2484f47fe60cde5216e85" Jul 2 02:35:32.611908 env[1446]: time="2024-07-02T02:35:32.611854745Z" level=info msg="RemoveContainer for \"c3f7e6da0fc8a19a3e8b2adcb6d33a230cd62f23a9b2484f47fe60cde5216e85\"" Jul 2 02:35:32.623161 env[1446]: time="2024-07-02T02:35:32.623092140Z" level=info msg="RemoveContainer for \"c3f7e6da0fc8a19a3e8b2adcb6d33a230cd62f23a9b2484f47fe60cde5216e85\" returns successfully" Jul 2 02:35:32.623624 kubelet[2517]: I0702 02:35:32.623595 2517 scope.go:117] "RemoveContainer" containerID="812c293232601a0568e53dad9775e42a08f621cbf91e1c3eba098f1ece0ec2fe" Jul 2 02:35:32.624756 env[1446]: time="2024-07-02T02:35:32.624729786Z" level=info msg="RemoveContainer for \"812c293232601a0568e53dad9775e42a08f621cbf91e1c3eba098f1ece0ec2fe\"" Jul 2 02:35:32.631869 env[1446]: time="2024-07-02T02:35:32.631828264Z" 
level=info msg="RemoveContainer for \"812c293232601a0568e53dad9775e42a08f621cbf91e1c3eba098f1ece0ec2fe\" returns successfully" Jul 2 02:35:32.632213 kubelet[2517]: I0702 02:35:32.632184 2517 scope.go:117] "RemoveContainer" containerID="51c04da40129959fd7df64c2b26c8d7b486aef32a8a75fe26297cef6ad5baaa1" Jul 2 02:35:32.633204 env[1446]: time="2024-07-02T02:35:32.633175182Z" level=info msg="RemoveContainer for \"51c04da40129959fd7df64c2b26c8d7b486aef32a8a75fe26297cef6ad5baaa1\"" Jul 2 02:35:32.642095 env[1446]: time="2024-07-02T02:35:32.642011469Z" level=info msg="RemoveContainer for \"51c04da40129959fd7df64c2b26c8d7b486aef32a8a75fe26297cef6ad5baaa1\" returns successfully" Jul 2 02:35:32.642484 kubelet[2517]: I0702 02:35:32.642465 2517 scope.go:117] "RemoveContainer" containerID="613a66718c8d00ea214a9eddb209d745b0cf701de6dc04a3c3b771b34ed6d3fa" Jul 2 02:35:32.644570 env[1446]: time="2024-07-02T02:35:32.644542940Z" level=info msg="RemoveContainer for \"613a66718c8d00ea214a9eddb209d745b0cf701de6dc04a3c3b771b34ed6d3fa\"" Jul 2 02:35:32.652287 env[1446]: time="2024-07-02T02:35:32.652247596Z" level=info msg="RemoveContainer for \"613a66718c8d00ea214a9eddb209d745b0cf701de6dc04a3c3b771b34ed6d3fa\" returns successfully" Jul 2 02:35:32.652654 kubelet[2517]: I0702 02:35:32.652636 2517 scope.go:117] "RemoveContainer" containerID="0d64a3a99146f0aeee9aba20fece7544e6b14c8b484c57f8e0ef641484361405" Jul 2 02:35:32.653869 env[1446]: time="2024-07-02T02:35:32.653837920Z" level=info msg="RemoveContainer for \"0d64a3a99146f0aeee9aba20fece7544e6b14c8b484c57f8e0ef641484361405\"" Jul 2 02:35:32.660266 env[1446]: time="2024-07-02T02:35:32.660228139Z" level=info msg="RemoveContainer for \"0d64a3a99146f0aeee9aba20fece7544e6b14c8b484c57f8e0ef641484361405\" returns successfully" Jul 2 02:35:32.660492 kubelet[2517]: I0702 02:35:32.660474 2517 scope.go:117] "RemoveContainer" containerID="c3f7e6da0fc8a19a3e8b2adcb6d33a230cd62f23a9b2484f47fe60cde5216e85" Jul 2 02:35:32.660853 env[1446]: 
time="2024-07-02T02:35:32.660786155Z" level=error msg="ContainerStatus for \"c3f7e6da0fc8a19a3e8b2adcb6d33a230cd62f23a9b2484f47fe60cde5216e85\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c3f7e6da0fc8a19a3e8b2adcb6d33a230cd62f23a9b2484f47fe60cde5216e85\": not found" Jul 2 02:35:32.660994 kubelet[2517]: E0702 02:35:32.660971 2517 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c3f7e6da0fc8a19a3e8b2adcb6d33a230cd62f23a9b2484f47fe60cde5216e85\": not found" containerID="c3f7e6da0fc8a19a3e8b2adcb6d33a230cd62f23a9b2484f47fe60cde5216e85" Jul 2 02:35:32.661095 kubelet[2517]: I0702 02:35:32.661077 2517 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c3f7e6da0fc8a19a3e8b2adcb6d33a230cd62f23a9b2484f47fe60cde5216e85"} err="failed to get container status \"c3f7e6da0fc8a19a3e8b2adcb6d33a230cd62f23a9b2484f47fe60cde5216e85\": rpc error: code = NotFound desc = an error occurred when try to find container \"c3f7e6da0fc8a19a3e8b2adcb6d33a230cd62f23a9b2484f47fe60cde5216e85\": not found" Jul 2 02:35:32.661147 kubelet[2517]: I0702 02:35:32.661095 2517 scope.go:117] "RemoveContainer" containerID="812c293232601a0568e53dad9775e42a08f621cbf91e1c3eba098f1ece0ec2fe" Jul 2 02:35:32.661462 env[1446]: time="2024-07-02T02:35:32.661417092Z" level=error msg="ContainerStatus for \"812c293232601a0568e53dad9775e42a08f621cbf91e1c3eba098f1ece0ec2fe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"812c293232601a0568e53dad9775e42a08f621cbf91e1c3eba098f1ece0ec2fe\": not found" Jul 2 02:35:32.661682 kubelet[2517]: E0702 02:35:32.661654 2517 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"812c293232601a0568e53dad9775e42a08f621cbf91e1c3eba098f1ece0ec2fe\": not found" containerID="812c293232601a0568e53dad9775e42a08f621cbf91e1c3eba098f1ece0ec2fe" Jul 2 02:35:32.661795 kubelet[2517]: I0702 02:35:32.661782 2517 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"812c293232601a0568e53dad9775e42a08f621cbf91e1c3eba098f1ece0ec2fe"} err="failed to get container status \"812c293232601a0568e53dad9775e42a08f621cbf91e1c3eba098f1ece0ec2fe\": rpc error: code = NotFound desc = an error occurred when try to find container \"812c293232601a0568e53dad9775e42a08f621cbf91e1c3eba098f1ece0ec2fe\": not found" Jul 2 02:35:32.661874 kubelet[2517]: I0702 02:35:32.661865 2517 scope.go:117] "RemoveContainer" containerID="51c04da40129959fd7df64c2b26c8d7b486aef32a8a75fe26297cef6ad5baaa1" Jul 2 02:35:32.662125 env[1446]: time="2024-07-02T02:35:32.662065270Z" level=error msg="ContainerStatus for \"51c04da40129959fd7df64c2b26c8d7b486aef32a8a75fe26297cef6ad5baaa1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"51c04da40129959fd7df64c2b26c8d7b486aef32a8a75fe26297cef6ad5baaa1\": not found" Jul 2 02:35:32.662299 kubelet[2517]: E0702 02:35:32.662276 2517 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"51c04da40129959fd7df64c2b26c8d7b486aef32a8a75fe26297cef6ad5baaa1\": not found" containerID="51c04da40129959fd7df64c2b26c8d7b486aef32a8a75fe26297cef6ad5baaa1" Jul 2 02:35:32.662419 kubelet[2517]: I0702 02:35:32.662407 2517 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"51c04da40129959fd7df64c2b26c8d7b486aef32a8a75fe26297cef6ad5baaa1"} err="failed to get container status \"51c04da40129959fd7df64c2b26c8d7b486aef32a8a75fe26297cef6ad5baaa1\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"51c04da40129959fd7df64c2b26c8d7b486aef32a8a75fe26297cef6ad5baaa1\": not found" Jul 2 02:35:32.662498 kubelet[2517]: I0702 02:35:32.662488 2517 scope.go:117] "RemoveContainer" containerID="613a66718c8d00ea214a9eddb209d745b0cf701de6dc04a3c3b771b34ed6d3fa" Jul 2 02:35:32.662839 env[1446]: time="2024-07-02T02:35:32.662797131Z" level=error msg="ContainerStatus for \"613a66718c8d00ea214a9eddb209d745b0cf701de6dc04a3c3b771b34ed6d3fa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"613a66718c8d00ea214a9eddb209d745b0cf701de6dc04a3c3b771b34ed6d3fa\": not found" Jul 2 02:35:32.663106 kubelet[2517]: E0702 02:35:32.663093 2517 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"613a66718c8d00ea214a9eddb209d745b0cf701de6dc04a3c3b771b34ed6d3fa\": not found" containerID="613a66718c8d00ea214a9eddb209d745b0cf701de6dc04a3c3b771b34ed6d3fa" Jul 2 02:35:32.663218 kubelet[2517]: I0702 02:35:32.663208 2517 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"613a66718c8d00ea214a9eddb209d745b0cf701de6dc04a3c3b771b34ed6d3fa"} err="failed to get container status \"613a66718c8d00ea214a9eddb209d745b0cf701de6dc04a3c3b771b34ed6d3fa\": rpc error: code = NotFound desc = an error occurred when try to find container \"613a66718c8d00ea214a9eddb209d745b0cf701de6dc04a3c3b771b34ed6d3fa\": not found" Jul 2 02:35:32.663301 kubelet[2517]: I0702 02:35:32.663291 2517 scope.go:117] "RemoveContainer" containerID="0d64a3a99146f0aeee9aba20fece7544e6b14c8b484c57f8e0ef641484361405" Jul 2 02:35:32.663618 env[1446]: time="2024-07-02T02:35:32.663558312Z" level=error msg="ContainerStatus for \"0d64a3a99146f0aeee9aba20fece7544e6b14c8b484c57f8e0ef641484361405\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0d64a3a99146f0aeee9aba20fece7544e6b14c8b484c57f8e0ef641484361405\": 
not found" Jul 2 02:35:32.663810 kubelet[2517]: E0702 02:35:32.663764 2517 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0d64a3a99146f0aeee9aba20fece7544e6b14c8b484c57f8e0ef641484361405\": not found" containerID="0d64a3a99146f0aeee9aba20fece7544e6b14c8b484c57f8e0ef641484361405" Jul 2 02:35:32.663914 kubelet[2517]: I0702 02:35:32.663904 2517 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0d64a3a99146f0aeee9aba20fece7544e6b14c8b484c57f8e0ef641484361405"} err="failed to get container status \"0d64a3a99146f0aeee9aba20fece7544e6b14c8b484c57f8e0ef641484361405\": rpc error: code = NotFound desc = an error occurred when try to find container \"0d64a3a99146f0aeee9aba20fece7544e6b14c8b484c57f8e0ef641484361405\": not found" Jul 2 02:35:32.664057 kubelet[2517]: I0702 02:35:32.664046 2517 scope.go:117] "RemoveContainer" containerID="66c61961a61957de853ba5955425136736a2e0e66d30d860526f32028cf37387" Jul 2 02:35:32.665305 kubelet[2517]: I0702 02:35:32.665290 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/91ab30d0-fead-4cdd-9688-ace1474008e2-cilium-cgroup\") pod \"91ab30d0-fead-4cdd-9688-ace1474008e2\" (UID: \"91ab30d0-fead-4cdd-9688-ace1474008e2\") " Jul 2 02:35:32.665414 env[1446]: time="2024-07-02T02:35:32.665349762Z" level=info msg="RemoveContainer for \"66c61961a61957de853ba5955425136736a2e0e66d30d860526f32028cf37387\"" Jul 2 02:35:32.665599 kubelet[2517]: I0702 02:35:32.665585 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/709d54a0-363a-4a60-89e7-314339f722a0-cilium-config-path\") pod \"709d54a0-363a-4a60-89e7-314339f722a0\" (UID: \"709d54a0-363a-4a60-89e7-314339f722a0\") " Jul 2 02:35:32.665810 kubelet[2517]: I0702 
02:35:32.665788 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l99sg\" (UniqueName: \"kubernetes.io/projected/709d54a0-363a-4a60-89e7-314339f722a0-kube-api-access-l99sg\") pod \"709d54a0-363a-4a60-89e7-314339f722a0\" (UID: \"709d54a0-363a-4a60-89e7-314339f722a0\") " Jul 2 02:35:32.665810 kubelet[2517]: I0702 02:35:32.665820 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/91ab30d0-fead-4cdd-9688-ace1474008e2-etc-cni-netd\") pod \"91ab30d0-fead-4cdd-9688-ace1474008e2\" (UID: \"91ab30d0-fead-4cdd-9688-ace1474008e2\") " Jul 2 02:35:32.665810 kubelet[2517]: I0702 02:35:32.665838 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/91ab30d0-fead-4cdd-9688-ace1474008e2-bpf-maps\") pod \"91ab30d0-fead-4cdd-9688-ace1474008e2\" (UID: \"91ab30d0-fead-4cdd-9688-ace1474008e2\") " Jul 2 02:35:32.665941 kubelet[2517]: I0702 02:35:32.665858 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/91ab30d0-fead-4cdd-9688-ace1474008e2-cilium-config-path\") pod \"91ab30d0-fead-4cdd-9688-ace1474008e2\" (UID: \"91ab30d0-fead-4cdd-9688-ace1474008e2\") " Jul 2 02:35:32.667969 kubelet[2517]: I0702 02:35:32.667933 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91ab30d0-fead-4cdd-9688-ace1474008e2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "91ab30d0-fead-4cdd-9688-ace1474008e2" (UID: "91ab30d0-fead-4cdd-9688-ace1474008e2"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 02:35:32.668046 kubelet[2517]: I0702 02:35:32.665516 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91ab30d0-fead-4cdd-9688-ace1474008e2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "91ab30d0-fead-4cdd-9688-ace1474008e2" (UID: "91ab30d0-fead-4cdd-9688-ace1474008e2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 02:35:32.668346 kubelet[2517]: I0702 02:35:32.668295 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/709d54a0-363a-4a60-89e7-314339f722a0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "709d54a0-363a-4a60-89e7-314339f722a0" (UID: "709d54a0-363a-4a60-89e7-314339f722a0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 02:35:32.668526 kubelet[2517]: I0702 02:35:32.668456 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91ab30d0-fead-4cdd-9688-ace1474008e2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "91ab30d0-fead-4cdd-9688-ace1474008e2" (UID: "91ab30d0-fead-4cdd-9688-ace1474008e2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 02:35:32.668614 kubelet[2517]: I0702 02:35:32.668474 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91ab30d0-fead-4cdd-9688-ace1474008e2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "91ab30d0-fead-4cdd-9688-ace1474008e2" (UID: "91ab30d0-fead-4cdd-9688-ace1474008e2"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 02:35:32.671159 kubelet[2517]: I0702 02:35:32.671125 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/709d54a0-363a-4a60-89e7-314339f722a0-kube-api-access-l99sg" (OuterVolumeSpecName: "kube-api-access-l99sg") pod "709d54a0-363a-4a60-89e7-314339f722a0" (UID: "709d54a0-363a-4a60-89e7-314339f722a0"). InnerVolumeSpecName "kube-api-access-l99sg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 02:35:32.673947 env[1446]: time="2024-07-02T02:35:32.673905762Z" level=info msg="RemoveContainer for \"66c61961a61957de853ba5955425136736a2e0e66d30d860526f32028cf37387\" returns successfully" Jul 2 02:35:32.674254 kubelet[2517]: I0702 02:35:32.674229 2517 scope.go:117] "RemoveContainer" containerID="66c61961a61957de853ba5955425136736a2e0e66d30d860526f32028cf37387" Jul 2 02:35:32.674628 env[1446]: time="2024-07-02T02:35:32.674566260Z" level=error msg="ContainerStatus for \"66c61961a61957de853ba5955425136736a2e0e66d30d860526f32028cf37387\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"66c61961a61957de853ba5955425136736a2e0e66d30d860526f32028cf37387\": not found" Jul 2 02:35:32.674834 kubelet[2517]: E0702 02:35:32.674820 2517 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"66c61961a61957de853ba5955425136736a2e0e66d30d860526f32028cf37387\": not found" containerID="66c61961a61957de853ba5955425136736a2e0e66d30d860526f32028cf37387" Jul 2 02:35:32.674958 kubelet[2517]: I0702 02:35:32.674943 2517 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"66c61961a61957de853ba5955425136736a2e0e66d30d860526f32028cf37387"} err="failed to get container status \"66c61961a61957de853ba5955425136736a2e0e66d30d860526f32028cf37387\": rpc error: code = NotFound desc = an error 
occurred when try to find container \"66c61961a61957de853ba5955425136736a2e0e66d30d860526f32028cf37387\": not found" Jul 2 02:35:32.766253 kubelet[2517]: I0702 02:35:32.766214 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/91ab30d0-fead-4cdd-9688-ace1474008e2-cilium-run\") pod \"91ab30d0-fead-4cdd-9688-ace1474008e2\" (UID: \"91ab30d0-fead-4cdd-9688-ace1474008e2\") " Jul 2 02:35:32.766486 kubelet[2517]: I0702 02:35:32.766473 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/91ab30d0-fead-4cdd-9688-ace1474008e2-hubble-tls\") pod \"91ab30d0-fead-4cdd-9688-ace1474008e2\" (UID: \"91ab30d0-fead-4cdd-9688-ace1474008e2\") " Jul 2 02:35:32.766584 kubelet[2517]: I0702 02:35:32.766573 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91ab30d0-fead-4cdd-9688-ace1474008e2-lib-modules\") pod \"91ab30d0-fead-4cdd-9688-ace1474008e2\" (UID: \"91ab30d0-fead-4cdd-9688-ace1474008e2\") " Jul 2 02:35:32.766703 kubelet[2517]: I0702 02:35:32.766691 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-657ws\" (UniqueName: \"kubernetes.io/projected/91ab30d0-fead-4cdd-9688-ace1474008e2-kube-api-access-657ws\") pod \"91ab30d0-fead-4cdd-9688-ace1474008e2\" (UID: \"91ab30d0-fead-4cdd-9688-ace1474008e2\") " Jul 2 02:35:32.766784 kubelet[2517]: I0702 02:35:32.766775 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91ab30d0-fead-4cdd-9688-ace1474008e2-xtables-lock\") pod \"91ab30d0-fead-4cdd-9688-ace1474008e2\" (UID: \"91ab30d0-fead-4cdd-9688-ace1474008e2\") " Jul 2 02:35:32.766870 kubelet[2517]: I0702 02:35:32.766860 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/91ab30d0-fead-4cdd-9688-ace1474008e2-host-proc-sys-net\") pod \"91ab30d0-fead-4cdd-9688-ace1474008e2\" (UID: \"91ab30d0-fead-4cdd-9688-ace1474008e2\") " Jul 2 02:35:32.766968 kubelet[2517]: I0702 02:35:32.766959 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/91ab30d0-fead-4cdd-9688-ace1474008e2-clustermesh-secrets\") pod \"91ab30d0-fead-4cdd-9688-ace1474008e2\" (UID: \"91ab30d0-fead-4cdd-9688-ace1474008e2\") " Jul 2 02:35:32.767059 kubelet[2517]: I0702 02:35:32.767049 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/91ab30d0-fead-4cdd-9688-ace1474008e2-hostproc\") pod \"91ab30d0-fead-4cdd-9688-ace1474008e2\" (UID: \"91ab30d0-fead-4cdd-9688-ace1474008e2\") " Jul 2 02:35:32.767158 kubelet[2517]: I0702 02:35:32.767147 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/91ab30d0-fead-4cdd-9688-ace1474008e2-host-proc-sys-kernel\") pod \"91ab30d0-fead-4cdd-9688-ace1474008e2\" (UID: \"91ab30d0-fead-4cdd-9688-ace1474008e2\") " Jul 2 02:35:32.767248 kubelet[2517]: I0702 02:35:32.767239 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/91ab30d0-fead-4cdd-9688-ace1474008e2-cni-path\") pod \"91ab30d0-fead-4cdd-9688-ace1474008e2\" (UID: \"91ab30d0-fead-4cdd-9688-ace1474008e2\") " Jul 2 02:35:32.767378 kubelet[2517]: I0702 02:35:32.767359 2517 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/91ab30d0-fead-4cdd-9688-ace1474008e2-cilium-cgroup\") on node \"ci-3510.3.5-a-c92d6bc2c6\" DevicePath \"\"" Jul 2 02:35:32.767465 kubelet[2517]: I0702 02:35:32.767454 2517 reconciler_common.go:300] "Volume detached 
for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/709d54a0-363a-4a60-89e7-314339f722a0-cilium-config-path\") on node \"ci-3510.3.5-a-c92d6bc2c6\" DevicePath \"\"" Jul 2 02:35:32.767544 kubelet[2517]: I0702 02:35:32.767534 2517 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-l99sg\" (UniqueName: \"kubernetes.io/projected/709d54a0-363a-4a60-89e7-314339f722a0-kube-api-access-l99sg\") on node \"ci-3510.3.5-a-c92d6bc2c6\" DevicePath \"\"" Jul 2 02:35:32.767621 kubelet[2517]: I0702 02:35:32.767612 2517 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/91ab30d0-fead-4cdd-9688-ace1474008e2-cilium-config-path\") on node \"ci-3510.3.5-a-c92d6bc2c6\" DevicePath \"\"" Jul 2 02:35:32.767720 kubelet[2517]: I0702 02:35:32.767709 2517 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/91ab30d0-fead-4cdd-9688-ace1474008e2-etc-cni-netd\") on node \"ci-3510.3.5-a-c92d6bc2c6\" DevicePath \"\"" Jul 2 02:35:32.767796 kubelet[2517]: I0702 02:35:32.767787 2517 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/91ab30d0-fead-4cdd-9688-ace1474008e2-bpf-maps\") on node \"ci-3510.3.5-a-c92d6bc2c6\" DevicePath \"\"" Jul 2 02:35:32.767889 kubelet[2517]: I0702 02:35:32.767876 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91ab30d0-fead-4cdd-9688-ace1474008e2-cni-path" (OuterVolumeSpecName: "cni-path") pod "91ab30d0-fead-4cdd-9688-ace1474008e2" (UID: "91ab30d0-fead-4cdd-9688-ace1474008e2"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 02:35:32.767995 kubelet[2517]: I0702 02:35:32.767981 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91ab30d0-fead-4cdd-9688-ace1474008e2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "91ab30d0-fead-4cdd-9688-ace1474008e2" (UID: "91ab30d0-fead-4cdd-9688-ace1474008e2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 02:35:32.768678 kubelet[2517]: I0702 02:35:32.768653 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91ab30d0-fead-4cdd-9688-ace1474008e2-hostproc" (OuterVolumeSpecName: "hostproc") pod "91ab30d0-fead-4cdd-9688-ace1474008e2" (UID: "91ab30d0-fead-4cdd-9688-ace1474008e2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 02:35:32.769043 kubelet[2517]: I0702 02:35:32.768998 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91ab30d0-fead-4cdd-9688-ace1474008e2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "91ab30d0-fead-4cdd-9688-ace1474008e2" (UID: "91ab30d0-fead-4cdd-9688-ace1474008e2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 02:35:32.769125 kubelet[2517]: I0702 02:35:32.766266 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91ab30d0-fead-4cdd-9688-ace1474008e2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "91ab30d0-fead-4cdd-9688-ace1474008e2" (UID: "91ab30d0-fead-4cdd-9688-ace1474008e2"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 02:35:32.769205 kubelet[2517]: I0702 02:35:32.767160 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91ab30d0-fead-4cdd-9688-ace1474008e2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "91ab30d0-fead-4cdd-9688-ace1474008e2" (UID: "91ab30d0-fead-4cdd-9688-ace1474008e2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 02:35:32.769268 kubelet[2517]: I0702 02:35:32.767178 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91ab30d0-fead-4cdd-9688-ace1474008e2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "91ab30d0-fead-4cdd-9688-ace1474008e2" (UID: "91ab30d0-fead-4cdd-9688-ace1474008e2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 02:35:32.769771 kubelet[2517]: I0702 02:35:32.769687 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91ab30d0-fead-4cdd-9688-ace1474008e2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "91ab30d0-fead-4cdd-9688-ace1474008e2" (UID: "91ab30d0-fead-4cdd-9688-ace1474008e2"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 02:35:32.772025 kubelet[2517]: I0702 02:35:32.771991 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91ab30d0-fead-4cdd-9688-ace1474008e2-kube-api-access-657ws" (OuterVolumeSpecName: "kube-api-access-657ws") pod "91ab30d0-fead-4cdd-9688-ace1474008e2" (UID: "91ab30d0-fead-4cdd-9688-ace1474008e2"). InnerVolumeSpecName "kube-api-access-657ws". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 02:35:32.772253 kubelet[2517]: I0702 02:35:32.772234 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91ab30d0-fead-4cdd-9688-ace1474008e2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "91ab30d0-fead-4cdd-9688-ace1474008e2" (UID: "91ab30d0-fead-4cdd-9688-ace1474008e2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 02:35:32.868607 kubelet[2517]: I0702 02:35:32.868564 2517 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/91ab30d0-fead-4cdd-9688-ace1474008e2-cilium-run\") on node \"ci-3510.3.5-a-c92d6bc2c6\" DevicePath \"\"" Jul 2 02:35:32.868607 kubelet[2517]: I0702 02:35:32.868598 2517 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91ab30d0-fead-4cdd-9688-ace1474008e2-lib-modules\") on node \"ci-3510.3.5-a-c92d6bc2c6\" DevicePath \"\"" Jul 2 02:35:32.868607 kubelet[2517]: I0702 02:35:32.868614 2517 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-657ws\" (UniqueName: \"kubernetes.io/projected/91ab30d0-fead-4cdd-9688-ace1474008e2-kube-api-access-657ws\") on node \"ci-3510.3.5-a-c92d6bc2c6\" DevicePath \"\"" Jul 2 02:35:32.868801 kubelet[2517]: I0702 02:35:32.868626 2517 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91ab30d0-fead-4cdd-9688-ace1474008e2-xtables-lock\") on node \"ci-3510.3.5-a-c92d6bc2c6\" DevicePath \"\"" Jul 2 02:35:32.868801 kubelet[2517]: I0702 02:35:32.868638 2517 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/91ab30d0-fead-4cdd-9688-ace1474008e2-host-proc-sys-net\") on node \"ci-3510.3.5-a-c92d6bc2c6\" DevicePath \"\"" Jul 2 02:35:32.868801 kubelet[2517]: I0702 02:35:32.868648 2517 
reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/91ab30d0-fead-4cdd-9688-ace1474008e2-clustermesh-secrets\") on node \"ci-3510.3.5-a-c92d6bc2c6\" DevicePath \"\"" Jul 2 02:35:32.868801 kubelet[2517]: I0702 02:35:32.868658 2517 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/91ab30d0-fead-4cdd-9688-ace1474008e2-hubble-tls\") on node \"ci-3510.3.5-a-c92d6bc2c6\" DevicePath \"\"" Jul 2 02:35:32.868801 kubelet[2517]: I0702 02:35:32.868670 2517 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/91ab30d0-fead-4cdd-9688-ace1474008e2-hostproc\") on node \"ci-3510.3.5-a-c92d6bc2c6\" DevicePath \"\"" Jul 2 02:35:32.868801 kubelet[2517]: I0702 02:35:32.868680 2517 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/91ab30d0-fead-4cdd-9688-ace1474008e2-host-proc-sys-kernel\") on node \"ci-3510.3.5-a-c92d6bc2c6\" DevicePath \"\"" Jul 2 02:35:32.868801 kubelet[2517]: I0702 02:35:32.868690 2517 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/91ab30d0-fead-4cdd-9688-ace1474008e2-cni-path\") on node \"ci-3510.3.5-a-c92d6bc2c6\" DevicePath \"\"" Jul 2 02:35:32.909196 systemd[1]: Removed slice kubepods-burstable-pod91ab30d0_fead_4cdd_9688_ace1474008e2.slice. Jul 2 02:35:32.909292 systemd[1]: kubepods-burstable-pod91ab30d0_fead_4cdd_9688_ace1474008e2.slice: Consumed 6.392s CPU time. Jul 2 02:35:32.915500 systemd[1]: Removed slice kubepods-besteffort-pod709d54a0_363a_4a60_89e7_314339f722a0.slice. Jul 2 02:35:33.380328 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9c721c062bb7cd81ef1e44c51615958bd05b327311b90454225c4c40ee31c27-rootfs.mount: Deactivated successfully. 
Jul 2 02:35:33.380424 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d9c721c062bb7cd81ef1e44c51615958bd05b327311b90454225c4c40ee31c27-shm.mount: Deactivated successfully. Jul 2 02:35:33.380481 systemd[1]: var-lib-kubelet-pods-709d54a0\x2d363a\x2d4a60\x2d89e7\x2d314339f722a0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl99sg.mount: Deactivated successfully. Jul 2 02:35:33.380531 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54f9184d2cae0a10ba6dd5977e15d2bd69ea9ddf50993f59570e1b0d1619a1c5-rootfs.mount: Deactivated successfully. Jul 2 02:35:33.380583 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-54f9184d2cae0a10ba6dd5977e15d2bd69ea9ddf50993f59570e1b0d1619a1c5-shm.mount: Deactivated successfully. Jul 2 02:35:33.380631 systemd[1]: var-lib-kubelet-pods-91ab30d0\x2dfead\x2d4cdd\x2d9688\x2dace1474008e2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d657ws.mount: Deactivated successfully. Jul 2 02:35:33.380679 systemd[1]: var-lib-kubelet-pods-91ab30d0\x2dfead\x2d4cdd\x2d9688\x2dace1474008e2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 02:35:33.380725 systemd[1]: var-lib-kubelet-pods-91ab30d0\x2dfead\x2d4cdd\x2d9688\x2dace1474008e2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jul 2 02:35:34.152240 kubelet[2517]: I0702 02:35:34.152199 2517 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="709d54a0-363a-4a60-89e7-314339f722a0" path="/var/lib/kubelet/pods/709d54a0-363a-4a60-89e7-314339f722a0/volumes" Jul 2 02:35:34.152829 kubelet[2517]: I0702 02:35:34.152801 2517 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="91ab30d0-fead-4cdd-9688-ace1474008e2" path="/var/lib/kubelet/pods/91ab30d0-fead-4cdd-9688-ace1474008e2/volumes" Jul 2 02:35:34.252762 kubelet[2517]: E0702 02:35:34.252729 2517 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 02:35:34.400509 sshd[4090]: pam_unix(sshd:session): session closed for user core Jul 2 02:35:34.403623 systemd-logind[1435]: Session 25 logged out. Waiting for processes to exit. Jul 2 02:35:34.403759 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 02:35:34.404750 systemd[1]: sshd@22-10.200.20.11:22-10.200.16.10:40142.service: Deactivated successfully. Jul 2 02:35:34.405618 systemd-logind[1435]: Removed session 25. Jul 2 02:35:34.470910 systemd[1]: Started sshd@23-10.200.20.11:22-10.200.16.10:40152.service. Jul 2 02:35:34.889531 sshd[4250]: Accepted publickey for core from 10.200.16.10 port 40152 ssh2: RSA SHA256:I/YdyjcuZ5Fsu3YYuMkl0R6I+4bywGkiBXqUH7a1KBg Jul 2 02:35:34.890807 sshd[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 02:35:34.895239 systemd[1]: Started session-26.scope. Jul 2 02:35:34.895595 systemd-logind[1435]: New session 26 of user core. 
Jul 2 02:35:36.271351 kubelet[2517]: I0702 02:35:36.271305 2517 topology_manager.go:215] "Topology Admit Handler" podUID="9f29135b-e5b3-4e03-834d-9577767a578c" podNamespace="kube-system" podName="cilium-plt99" Jul 2 02:35:36.271755 kubelet[2517]: E0702 02:35:36.271739 2517 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="709d54a0-363a-4a60-89e7-314339f722a0" containerName="cilium-operator" Jul 2 02:35:36.271829 kubelet[2517]: E0702 02:35:36.271820 2517 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="91ab30d0-fead-4cdd-9688-ace1474008e2" containerName="mount-bpf-fs" Jul 2 02:35:36.271888 kubelet[2517]: E0702 02:35:36.271879 2517 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="91ab30d0-fead-4cdd-9688-ace1474008e2" containerName="clean-cilium-state" Jul 2 02:35:36.271944 kubelet[2517]: E0702 02:35:36.271935 2517 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="91ab30d0-fead-4cdd-9688-ace1474008e2" containerName="mount-cgroup" Jul 2 02:35:36.271992 kubelet[2517]: E0702 02:35:36.271984 2517 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="91ab30d0-fead-4cdd-9688-ace1474008e2" containerName="apply-sysctl-overwrites" Jul 2 02:35:36.272043 kubelet[2517]: E0702 02:35:36.272035 2517 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="91ab30d0-fead-4cdd-9688-ace1474008e2" containerName="cilium-agent" Jul 2 02:35:36.272117 kubelet[2517]: I0702 02:35:36.272107 2517 memory_manager.go:354] "RemoveStaleState removing state" podUID="91ab30d0-fead-4cdd-9688-ace1474008e2" containerName="cilium-agent" Jul 2 02:35:36.272180 kubelet[2517]: I0702 02:35:36.272171 2517 memory_manager.go:354] "RemoveStaleState removing state" podUID="709d54a0-363a-4a60-89e7-314339f722a0" containerName="cilium-operator" Jul 2 02:35:36.277602 systemd[1]: Created slice kubepods-burstable-pod9f29135b_e5b3_4e03_834d_9577767a578c.slice. 
Jul 2 02:35:36.285422 sshd[4250]: pam_unix(sshd:session): session closed for user core Jul 2 02:35:36.288928 systemd-logind[1435]: Session 26 logged out. Waiting for processes to exit. Jul 2 02:35:36.289552 kubelet[2517]: I0702 02:35:36.289027 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9f29135b-e5b3-4e03-834d-9577767a578c-hostproc\") pod \"cilium-plt99\" (UID: \"9f29135b-e5b3-4e03-834d-9577767a578c\") " pod="kube-system/cilium-plt99" Jul 2 02:35:36.289552 kubelet[2517]: I0702 02:35:36.289084 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f29135b-e5b3-4e03-834d-9577767a578c-etc-cni-netd\") pod \"cilium-plt99\" (UID: \"9f29135b-e5b3-4e03-834d-9577767a578c\") " pod="kube-system/cilium-plt99" Jul 2 02:35:36.289552 kubelet[2517]: I0702 02:35:36.289108 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9f29135b-e5b3-4e03-834d-9577767a578c-cilium-config-path\") pod \"cilium-plt99\" (UID: \"9f29135b-e5b3-4e03-834d-9577767a578c\") " pod="kube-system/cilium-plt99" Jul 2 02:35:36.289552 kubelet[2517]: I0702 02:35:36.289284 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9f29135b-e5b3-4e03-834d-9577767a578c-bpf-maps\") pod \"cilium-plt99\" (UID: \"9f29135b-e5b3-4e03-834d-9577767a578c\") " pod="kube-system/cilium-plt99" Jul 2 02:35:36.289552 kubelet[2517]: I0702 02:35:36.289367 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9f29135b-e5b3-4e03-834d-9577767a578c-cilium-run\") pod \"cilium-plt99\" (UID: \"9f29135b-e5b3-4e03-834d-9577767a578c\") " 
pod="kube-system/cilium-plt99" Jul 2 02:35:36.289552 kubelet[2517]: I0702 02:35:36.289388 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f29135b-e5b3-4e03-834d-9577767a578c-lib-modules\") pod \"cilium-plt99\" (UID: \"9f29135b-e5b3-4e03-834d-9577767a578c\") " pod="kube-system/cilium-plt99" Jul 2 02:35:36.289716 kubelet[2517]: I0702 02:35:36.289433 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9f29135b-e5b3-4e03-834d-9577767a578c-host-proc-sys-kernel\") pod \"cilium-plt99\" (UID: \"9f29135b-e5b3-4e03-834d-9577767a578c\") " pod="kube-system/cilium-plt99" Jul 2 02:35:36.289716 kubelet[2517]: I0702 02:35:36.289454 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4l2j\" (UniqueName: \"kubernetes.io/projected/9f29135b-e5b3-4e03-834d-9577767a578c-kube-api-access-r4l2j\") pod \"cilium-plt99\" (UID: \"9f29135b-e5b3-4e03-834d-9577767a578c\") " pod="kube-system/cilium-plt99" Jul 2 02:35:36.289716 kubelet[2517]: I0702 02:35:36.289520 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9f29135b-e5b3-4e03-834d-9577767a578c-cilium-cgroup\") pod \"cilium-plt99\" (UID: \"9f29135b-e5b3-4e03-834d-9577767a578c\") " pod="kube-system/cilium-plt99" Jul 2 02:35:36.289716 kubelet[2517]: I0702 02:35:36.289545 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9f29135b-e5b3-4e03-834d-9577767a578c-cilium-ipsec-secrets\") pod \"cilium-plt99\" (UID: \"9f29135b-e5b3-4e03-834d-9577767a578c\") " pod="kube-system/cilium-plt99" Jul 2 02:35:36.289716 kubelet[2517]: I0702 02:35:36.289593 2517 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9f29135b-e5b3-4e03-834d-9577767a578c-clustermesh-secrets\") pod \"cilium-plt99\" (UID: \"9f29135b-e5b3-4e03-834d-9577767a578c\") " pod="kube-system/cilium-plt99" Jul 2 02:35:36.289829 kubelet[2517]: I0702 02:35:36.289615 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9f29135b-e5b3-4e03-834d-9577767a578c-host-proc-sys-net\") pod \"cilium-plt99\" (UID: \"9f29135b-e5b3-4e03-834d-9577767a578c\") " pod="kube-system/cilium-plt99" Jul 2 02:35:36.289829 kubelet[2517]: I0702 02:35:36.289673 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9f29135b-e5b3-4e03-834d-9577767a578c-cni-path\") pod \"cilium-plt99\" (UID: \"9f29135b-e5b3-4e03-834d-9577767a578c\") " pod="kube-system/cilium-plt99" Jul 2 02:35:36.289829 kubelet[2517]: I0702 02:35:36.289693 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9f29135b-e5b3-4e03-834d-9577767a578c-xtables-lock\") pod \"cilium-plt99\" (UID: \"9f29135b-e5b3-4e03-834d-9577767a578c\") " pod="kube-system/cilium-plt99" Jul 2 02:35:36.289829 kubelet[2517]: I0702 02:35:36.289739 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9f29135b-e5b3-4e03-834d-9577767a578c-hubble-tls\") pod \"cilium-plt99\" (UID: \"9f29135b-e5b3-4e03-834d-9577767a578c\") " pod="kube-system/cilium-plt99" Jul 2 02:35:36.290221 systemd[1]: sshd@23-10.200.20.11:22-10.200.16.10:40152.service: Deactivated successfully. Jul 2 02:35:36.290960 systemd[1]: session-26.scope: Deactivated successfully. 
Jul 2 02:35:36.291111 systemd[1]: session-26.scope: Consumed 1.028s CPU time. Jul 2 02:35:36.291788 systemd-logind[1435]: Removed session 26. Jul 2 02:35:36.355359 systemd[1]: Started sshd@24-10.200.20.11:22-10.200.16.10:40160.service. Jul 2 02:35:36.581936 env[1446]: time="2024-07-02T02:35:36.581600365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-plt99,Uid:9f29135b-e5b3-4e03-834d-9577767a578c,Namespace:kube-system,Attempt:0,}" Jul 2 02:35:36.618547 env[1446]: time="2024-07-02T02:35:36.618477047Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 02:35:36.618547 env[1446]: time="2024-07-02T02:35:36.618516408Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 02:35:36.618756 env[1446]: time="2024-07-02T02:35:36.618527288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 02:35:36.619006 env[1446]: time="2024-07-02T02:35:36.618958100Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/21d28b04794f7e6ac71b066e2bce17ac04ee647f57aff5a23bef48b1acaa8e34 pid=4275 runtime=io.containerd.runc.v2 Jul 2 02:35:36.636544 systemd[1]: Started cri-containerd-21d28b04794f7e6ac71b066e2bce17ac04ee647f57aff5a23bef48b1acaa8e34.scope. 
Jul 2 02:35:36.664619 env[1446]: time="2024-07-02T02:35:36.664577379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-plt99,Uid:9f29135b-e5b3-4e03-834d-9577767a578c,Namespace:kube-system,Attempt:0,} returns sandbox id \"21d28b04794f7e6ac71b066e2bce17ac04ee647f57aff5a23bef48b1acaa8e34\"" Jul 2 02:35:36.670389 env[1446]: time="2024-07-02T02:35:36.670335736Z" level=info msg="CreateContainer within sandbox \"21d28b04794f7e6ac71b066e2bce17ac04ee647f57aff5a23bef48b1acaa8e34\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 02:35:36.701420 env[1446]: time="2024-07-02T02:35:36.701371979Z" level=info msg="CreateContainer within sandbox \"21d28b04794f7e6ac71b066e2bce17ac04ee647f57aff5a23bef48b1acaa8e34\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a0140d39b221e96f9d935ce6a0ff081125c48659358836d5bde9eae8788e7679\"" Jul 2 02:35:36.702513 env[1446]: time="2024-07-02T02:35:36.702487409Z" level=info msg="StartContainer for \"a0140d39b221e96f9d935ce6a0ff081125c48659358836d5bde9eae8788e7679\"" Jul 2 02:35:36.718516 systemd[1]: Started cri-containerd-a0140d39b221e96f9d935ce6a0ff081125c48659358836d5bde9eae8788e7679.scope. Jul 2 02:35:36.731052 systemd[1]: cri-containerd-a0140d39b221e96f9d935ce6a0ff081125c48659358836d5bde9eae8788e7679.scope: Deactivated successfully. 
Jul 2 02:35:36.776042 sshd[4261]: Accepted publickey for core from 10.200.16.10 port 40160 ssh2: RSA SHA256:I/YdyjcuZ5Fsu3YYuMkl0R6I+4bywGkiBXqUH7a1KBg Jul 2 02:35:36.776974 sshd[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 02:35:36.780662 env[1446]: time="2024-07-02T02:35:36.779981835Z" level=info msg="shim disconnected" id=a0140d39b221e96f9d935ce6a0ff081125c48659358836d5bde9eae8788e7679 Jul 2 02:35:36.780662 env[1446]: time="2024-07-02T02:35:36.780033676Z" level=warning msg="cleaning up after shim disconnected" id=a0140d39b221e96f9d935ce6a0ff081125c48659358836d5bde9eae8788e7679 namespace=k8s.io Jul 2 02:35:36.780662 env[1446]: time="2024-07-02T02:35:36.780043036Z" level=info msg="cleaning up dead shim" Jul 2 02:35:36.782166 systemd[1]: Started session-27.scope. Jul 2 02:35:36.783723 systemd-logind[1435]: New session 27 of user core. Jul 2 02:35:36.791460 env[1446]: time="2024-07-02T02:35:36.791409505Z" level=warning msg="cleanup warnings time=\"2024-07-02T02:35:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4333 runtime=io.containerd.runc.v2\ntime=\"2024-07-02T02:35:36Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/a0140d39b221e96f9d935ce6a0ff081125c48659358836d5bde9eae8788e7679/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Jul 2 02:35:36.791802 env[1446]: time="2024-07-02T02:35:36.791689913Z" level=error msg="copy shim log" error="read /proc/self/fd/41: file already closed" Jul 2 02:35:36.792249 env[1446]: time="2024-07-02T02:35:36.792205327Z" level=error msg="Failed to pipe stderr of container \"a0140d39b221e96f9d935ce6a0ff081125c48659358836d5bde9eae8788e7679\"" error="reading from a closed fifo" Jul 2 02:35:36.792392 env[1446]: time="2024-07-02T02:35:36.792209847Z" level=error msg="Failed to pipe stdout of container \"a0140d39b221e96f9d935ce6a0ff081125c48659358836d5bde9eae8788e7679\"" error="reading from 
a closed fifo" Jul 2 02:35:36.798695 env[1446]: time="2024-07-02T02:35:36.798626381Z" level=error msg="StartContainer for \"a0140d39b221e96f9d935ce6a0ff081125c48659358836d5bde9eae8788e7679\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Jul 2 02:35:36.799353 kubelet[2517]: E0702 02:35:36.799025 2517 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="a0140d39b221e96f9d935ce6a0ff081125c48659358836d5bde9eae8788e7679" Jul 2 02:35:36.799353 kubelet[2517]: E0702 02:35:36.799150 2517 kuberuntime_manager.go:1262] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Jul 2 02:35:36.799353 kubelet[2517]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Jul 2 02:35:36.799353 kubelet[2517]: rm /hostbin/cilium-mount Jul 2 02:35:36.799569 kubelet[2517]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-r4l2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-plt99_kube-system(9f29135b-e5b3-4e03-834d-9577767a578c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Jul 2 02:35:36.799647 kubelet[2517]: E0702 02:35:36.799189 2517 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc 
create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-plt99" podUID="9f29135b-e5b3-4e03-834d-9577767a578c" Jul 2 02:35:37.150834 sshd[4261]: pam_unix(sshd:session): session closed for user core Jul 2 02:35:37.153919 systemd[1]: sshd@24-10.200.20.11:22-10.200.16.10:40160.service: Deactivated successfully. Jul 2 02:35:37.154687 systemd[1]: session-27.scope: Deactivated successfully. Jul 2 02:35:37.156294 systemd-logind[1435]: Session 27 logged out. Waiting for processes to exit. Jul 2 02:35:37.157614 systemd-logind[1435]: Removed session 27. Jul 2 02:35:37.223905 systemd[1]: Started sshd@25-10.200.20.11:22-10.200.16.10:40162.service. Jul 2 02:35:37.623103 env[1446]: time="2024-07-02T02:35:37.622990778Z" level=info msg="StopPodSandbox for \"21d28b04794f7e6ac71b066e2bce17ac04ee647f57aff5a23bef48b1acaa8e34\"" Jul 2 02:35:37.624401 env[1446]: time="2024-07-02T02:35:37.624367095Z" level=info msg="Container to stop \"a0140d39b221e96f9d935ce6a0ff081125c48659358836d5bde9eae8788e7679\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 02:35:37.626263 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-21d28b04794f7e6ac71b066e2bce17ac04ee647f57aff5a23bef48b1acaa8e34-shm.mount: Deactivated successfully. Jul 2 02:35:37.637211 systemd[1]: cri-containerd-21d28b04794f7e6ac71b066e2bce17ac04ee647f57aff5a23bef48b1acaa8e34.scope: Deactivated successfully. Jul 2 02:35:37.642824 sshd[4357]: Accepted publickey for core from 10.200.16.10 port 40162 ssh2: RSA SHA256:I/YdyjcuZ5Fsu3YYuMkl0R6I+4bywGkiBXqUH7a1KBg Jul 2 02:35:37.646091 sshd[4357]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 02:35:37.654850 systemd[1]: Started session-28.scope. Jul 2 02:35:37.655162 systemd-logind[1435]: New session 28 of user core. 
Jul 2 02:35:37.691906 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21d28b04794f7e6ac71b066e2bce17ac04ee647f57aff5a23bef48b1acaa8e34-rootfs.mount: Deactivated successfully. Jul 2 02:35:37.711269 env[1446]: time="2024-07-02T02:35:37.711216758Z" level=info msg="shim disconnected" id=21d28b04794f7e6ac71b066e2bce17ac04ee647f57aff5a23bef48b1acaa8e34 Jul 2 02:35:37.711269 env[1446]: time="2024-07-02T02:35:37.711263839Z" level=warning msg="cleaning up after shim disconnected" id=21d28b04794f7e6ac71b066e2bce17ac04ee647f57aff5a23bef48b1acaa8e34 namespace=k8s.io Jul 2 02:35:37.711269 env[1446]: time="2024-07-02T02:35:37.711273760Z" level=info msg="cleaning up dead shim" Jul 2 02:35:37.722971 env[1446]: time="2024-07-02T02:35:37.722917834Z" level=warning msg="cleanup warnings time=\"2024-07-02T02:35:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4377 runtime=io.containerd.runc.v2\n" Jul 2 02:35:37.723278 env[1446]: time="2024-07-02T02:35:37.723239763Z" level=info msg="TearDown network for sandbox \"21d28b04794f7e6ac71b066e2bce17ac04ee647f57aff5a23bef48b1acaa8e34\" successfully" Jul 2 02:35:37.723278 env[1446]: time="2024-07-02T02:35:37.723268083Z" level=info msg="StopPodSandbox for \"21d28b04794f7e6ac71b066e2bce17ac04ee647f57aff5a23bef48b1acaa8e34\" returns successfully" Jul 2 02:35:37.797002 kubelet[2517]: I0702 02:35:37.796956 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9f29135b-e5b3-4e03-834d-9577767a578c-host-proc-sys-net\") pod \"9f29135b-e5b3-4e03-834d-9577767a578c\" (UID: \"9f29135b-e5b3-4e03-834d-9577767a578c\") " Jul 2 02:35:37.797002 kubelet[2517]: I0702 02:35:37.797010 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f29135b-e5b3-4e03-834d-9577767a578c-etc-cni-netd\") pod \"9f29135b-e5b3-4e03-834d-9577767a578c\" (UID: 
\"9f29135b-e5b3-4e03-834d-9577767a578c\") " Jul 2 02:35:37.797429 kubelet[2517]: I0702 02:35:37.797052 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9f29135b-e5b3-4e03-834d-9577767a578c-cilium-config-path\") pod \"9f29135b-e5b3-4e03-834d-9577767a578c\" (UID: \"9f29135b-e5b3-4e03-834d-9577767a578c\") " Jul 2 02:35:37.797429 kubelet[2517]: I0702 02:35:37.797070 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9f29135b-e5b3-4e03-834d-9577767a578c-cni-path\") pod \"9f29135b-e5b3-4e03-834d-9577767a578c\" (UID: \"9f29135b-e5b3-4e03-834d-9577767a578c\") " Jul 2 02:35:37.797429 kubelet[2517]: I0702 02:35:37.797086 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9f29135b-e5b3-4e03-834d-9577767a578c-hostproc\") pod \"9f29135b-e5b3-4e03-834d-9577767a578c\" (UID: \"9f29135b-e5b3-4e03-834d-9577767a578c\") " Jul 2 02:35:37.797429 kubelet[2517]: I0702 02:35:37.797102 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9f29135b-e5b3-4e03-834d-9577767a578c-xtables-lock\") pod \"9f29135b-e5b3-4e03-834d-9577767a578c\" (UID: \"9f29135b-e5b3-4e03-834d-9577767a578c\") " Jul 2 02:35:37.797429 kubelet[2517]: I0702 02:35:37.797119 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f29135b-e5b3-4e03-834d-9577767a578c-lib-modules\") pod \"9f29135b-e5b3-4e03-834d-9577767a578c\" (UID: \"9f29135b-e5b3-4e03-834d-9577767a578c\") " Jul 2 02:35:37.797429 kubelet[2517]: I0702 02:35:37.797135 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/9f29135b-e5b3-4e03-834d-9577767a578c-cilium-cgroup\") pod \"9f29135b-e5b3-4e03-834d-9577767a578c\" (UID: \"9f29135b-e5b3-4e03-834d-9577767a578c\") " Jul 2 02:35:37.797591 kubelet[2517]: I0702 02:35:37.797152 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9f29135b-e5b3-4e03-834d-9577767a578c-bpf-maps\") pod \"9f29135b-e5b3-4e03-834d-9577767a578c\" (UID: \"9f29135b-e5b3-4e03-834d-9577767a578c\") " Jul 2 02:35:37.797591 kubelet[2517]: I0702 02:35:37.797172 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4l2j\" (UniqueName: \"kubernetes.io/projected/9f29135b-e5b3-4e03-834d-9577767a578c-kube-api-access-r4l2j\") pod \"9f29135b-e5b3-4e03-834d-9577767a578c\" (UID: \"9f29135b-e5b3-4e03-834d-9577767a578c\") " Jul 2 02:35:37.797591 kubelet[2517]: I0702 02:35:37.797191 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9f29135b-e5b3-4e03-834d-9577767a578c-hubble-tls\") pod \"9f29135b-e5b3-4e03-834d-9577767a578c\" (UID: \"9f29135b-e5b3-4e03-834d-9577767a578c\") " Jul 2 02:35:37.797591 kubelet[2517]: I0702 02:35:37.797208 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9f29135b-e5b3-4e03-834d-9577767a578c-cilium-run\") pod \"9f29135b-e5b3-4e03-834d-9577767a578c\" (UID: \"9f29135b-e5b3-4e03-834d-9577767a578c\") " Jul 2 02:35:37.797591 kubelet[2517]: I0702 02:35:37.797225 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9f29135b-e5b3-4e03-834d-9577767a578c-host-proc-sys-kernel\") pod \"9f29135b-e5b3-4e03-834d-9577767a578c\" (UID: \"9f29135b-e5b3-4e03-834d-9577767a578c\") " Jul 2 02:35:37.797591 kubelet[2517]: I0702 02:35:37.797245 2517 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9f29135b-e5b3-4e03-834d-9577767a578c-clustermesh-secrets\") pod \"9f29135b-e5b3-4e03-834d-9577767a578c\" (UID: \"9f29135b-e5b3-4e03-834d-9577767a578c\") " Jul 2 02:35:37.797728 kubelet[2517]: I0702 02:35:37.797265 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9f29135b-e5b3-4e03-834d-9577767a578c-cilium-ipsec-secrets\") pod \"9f29135b-e5b3-4e03-834d-9577767a578c\" (UID: \"9f29135b-e5b3-4e03-834d-9577767a578c\") " Jul 2 02:35:37.797728 kubelet[2517]: I0702 02:35:37.797647 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f29135b-e5b3-4e03-834d-9577767a578c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9f29135b-e5b3-4e03-834d-9577767a578c" (UID: "9f29135b-e5b3-4e03-834d-9577767a578c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 02:35:37.797728 kubelet[2517]: I0702 02:35:37.797682 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f29135b-e5b3-4e03-834d-9577767a578c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9f29135b-e5b3-4e03-834d-9577767a578c" (UID: "9f29135b-e5b3-4e03-834d-9577767a578c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 02:35:37.797728 kubelet[2517]: I0702 02:35:37.797714 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f29135b-e5b3-4e03-834d-9577767a578c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9f29135b-e5b3-4e03-834d-9577767a578c" (UID: "9f29135b-e5b3-4e03-834d-9577767a578c"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 02:35:37.799343 kubelet[2517]: I0702 02:35:37.798009 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f29135b-e5b3-4e03-834d-9577767a578c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9f29135b-e5b3-4e03-834d-9577767a578c" (UID: "9f29135b-e5b3-4e03-834d-9577767a578c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 02:35:37.799343 kubelet[2517]: I0702 02:35:37.798044 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f29135b-e5b3-4e03-834d-9577767a578c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9f29135b-e5b3-4e03-834d-9577767a578c" (UID: "9f29135b-e5b3-4e03-834d-9577767a578c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 02:35:37.799617 kubelet[2517]: I0702 02:35:37.799593 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f29135b-e5b3-4e03-834d-9577767a578c-cni-path" (OuterVolumeSpecName: "cni-path") pod "9f29135b-e5b3-4e03-834d-9577767a578c" (UID: "9f29135b-e5b3-4e03-834d-9577767a578c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 02:35:37.799713 kubelet[2517]: I0702 02:35:37.799699 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f29135b-e5b3-4e03-834d-9577767a578c-hostproc" (OuterVolumeSpecName: "hostproc") pod "9f29135b-e5b3-4e03-834d-9577767a578c" (UID: "9f29135b-e5b3-4e03-834d-9577767a578c"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 02:35:37.799807 kubelet[2517]: I0702 02:35:37.799793 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f29135b-e5b3-4e03-834d-9577767a578c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9f29135b-e5b3-4e03-834d-9577767a578c" (UID: "9f29135b-e5b3-4e03-834d-9577767a578c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 02:35:37.802424 systemd[1]: var-lib-kubelet-pods-9f29135b\x2de5b3\x2d4e03\x2d834d\x2d9577767a578c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr4l2j.mount: Deactivated successfully. Jul 2 02:35:37.803560 kubelet[2517]: I0702 02:35:37.803536 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f29135b-e5b3-4e03-834d-9577767a578c-kube-api-access-r4l2j" (OuterVolumeSpecName: "kube-api-access-r4l2j") pod "9f29135b-e5b3-4e03-834d-9577767a578c" (UID: "9f29135b-e5b3-4e03-834d-9577767a578c"). InnerVolumeSpecName "kube-api-access-r4l2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 02:35:37.803679 kubelet[2517]: I0702 02:35:37.803663 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f29135b-e5b3-4e03-834d-9577767a578c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9f29135b-e5b3-4e03-834d-9577767a578c" (UID: "9f29135b-e5b3-4e03-834d-9577767a578c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 02:35:37.803760 kubelet[2517]: I0702 02:35:37.803746 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f29135b-e5b3-4e03-834d-9577767a578c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9f29135b-e5b3-4e03-834d-9577767a578c" (UID: "9f29135b-e5b3-4e03-834d-9577767a578c"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 02:35:37.804690 kubelet[2517]: I0702 02:35:37.804666 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f29135b-e5b3-4e03-834d-9577767a578c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9f29135b-e5b3-4e03-834d-9577767a578c" (UID: "9f29135b-e5b3-4e03-834d-9577767a578c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 02:35:37.806245 kubelet[2517]: I0702 02:35:37.806219 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f29135b-e5b3-4e03-834d-9577767a578c-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "9f29135b-e5b3-4e03-834d-9577767a578c" (UID: "9f29135b-e5b3-4e03-834d-9577767a578c"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 02:35:37.809834 kubelet[2517]: I0702 02:35:37.808442 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f29135b-e5b3-4e03-834d-9577767a578c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9f29135b-e5b3-4e03-834d-9577767a578c" (UID: "9f29135b-e5b3-4e03-834d-9577767a578c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 02:35:37.808914 systemd[1]: var-lib-kubelet-pods-9f29135b\x2de5b3\x2d4e03\x2d834d\x2d9577767a578c-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 2 02:35:37.809005 systemd[1]: var-lib-kubelet-pods-9f29135b\x2de5b3\x2d4e03\x2d834d\x2d9577767a578c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jul 2 02:35:37.810651 kubelet[2517]: I0702 02:35:37.810615 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f29135b-e5b3-4e03-834d-9577767a578c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9f29135b-e5b3-4e03-834d-9577767a578c" (UID: "9f29135b-e5b3-4e03-834d-9577767a578c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 02:35:37.898018 kubelet[2517]: I0702 02:35:37.897976 2517 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f29135b-e5b3-4e03-834d-9577767a578c-etc-cni-netd\") on node \"ci-3510.3.5-a-c92d6bc2c6\" DevicePath \"\"" Jul 2 02:35:37.898018 kubelet[2517]: I0702 02:35:37.898017 2517 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9f29135b-e5b3-4e03-834d-9577767a578c-cilium-config-path\") on node \"ci-3510.3.5-a-c92d6bc2c6\" DevicePath \"\"" Jul 2 02:35:37.898018 kubelet[2517]: I0702 02:35:37.898030 2517 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9f29135b-e5b3-4e03-834d-9577767a578c-cni-path\") on node \"ci-3510.3.5-a-c92d6bc2c6\" DevicePath \"\"" Jul 2 02:35:37.898217 kubelet[2517]: I0702 02:35:37.898041 2517 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9f29135b-e5b3-4e03-834d-9577767a578c-hostproc\") on node \"ci-3510.3.5-a-c92d6bc2c6\" DevicePath \"\"" Jul 2 02:35:37.898217 kubelet[2517]: I0702 02:35:37.898052 2517 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9f29135b-e5b3-4e03-834d-9577767a578c-xtables-lock\") on node \"ci-3510.3.5-a-c92d6bc2c6\" DevicePath \"\"" Jul 2 02:35:37.898217 kubelet[2517]: I0702 02:35:37.898064 2517 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/9f29135b-e5b3-4e03-834d-9577767a578c-lib-modules\") on node \"ci-3510.3.5-a-c92d6bc2c6\" DevicePath \"\"" Jul 2 02:35:37.898217 kubelet[2517]: I0702 02:35:37.898074 2517 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9f29135b-e5b3-4e03-834d-9577767a578c-cilium-cgroup\") on node \"ci-3510.3.5-a-c92d6bc2c6\" DevicePath \"\"" Jul 2 02:35:37.898217 kubelet[2517]: I0702 02:35:37.898092 2517 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-r4l2j\" (UniqueName: \"kubernetes.io/projected/9f29135b-e5b3-4e03-834d-9577767a578c-kube-api-access-r4l2j\") on node \"ci-3510.3.5-a-c92d6bc2c6\" DevicePath \"\"" Jul 2 02:35:37.898217 kubelet[2517]: I0702 02:35:37.898102 2517 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9f29135b-e5b3-4e03-834d-9577767a578c-bpf-maps\") on node \"ci-3510.3.5-a-c92d6bc2c6\" DevicePath \"\"" Jul 2 02:35:37.898217 kubelet[2517]: I0702 02:35:37.898139 2517 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9f29135b-e5b3-4e03-834d-9577767a578c-cilium-ipsec-secrets\") on node \"ci-3510.3.5-a-c92d6bc2c6\" DevicePath \"\"" Jul 2 02:35:37.898217 kubelet[2517]: I0702 02:35:37.898148 2517 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9f29135b-e5b3-4e03-834d-9577767a578c-hubble-tls\") on node \"ci-3510.3.5-a-c92d6bc2c6\" DevicePath \"\"" Jul 2 02:35:37.898446 kubelet[2517]: I0702 02:35:37.898158 2517 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9f29135b-e5b3-4e03-834d-9577767a578c-cilium-run\") on node \"ci-3510.3.5-a-c92d6bc2c6\" DevicePath \"\"" Jul 2 02:35:37.898446 kubelet[2517]: I0702 02:35:37.898168 2517 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/9f29135b-e5b3-4e03-834d-9577767a578c-host-proc-sys-kernel\") on node \"ci-3510.3.5-a-c92d6bc2c6\" DevicePath \"\"" Jul 2 02:35:37.898446 kubelet[2517]: I0702 02:35:37.898178 2517 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9f29135b-e5b3-4e03-834d-9577767a578c-clustermesh-secrets\") on node \"ci-3510.3.5-a-c92d6bc2c6\" DevicePath \"\"" Jul 2 02:35:37.898446 kubelet[2517]: I0702 02:35:37.898188 2517 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9f29135b-e5b3-4e03-834d-9577767a578c-host-proc-sys-net\") on node \"ci-3510.3.5-a-c92d6bc2c6\" DevicePath \"\"" Jul 2 02:35:38.155479 systemd[1]: Removed slice kubepods-burstable-pod9f29135b_e5b3_4e03_834d_9577767a578c.slice. Jul 2 02:35:38.398052 systemd[1]: var-lib-kubelet-pods-9f29135b\x2de5b3\x2d4e03\x2d834d\x2d9577767a578c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jul 2 02:35:38.624425 kubelet[2517]: I0702 02:35:38.624306 2517 scope.go:117] "RemoveContainer" containerID="a0140d39b221e96f9d935ce6a0ff081125c48659358836d5bde9eae8788e7679" Jul 2 02:35:38.628857 env[1446]: time="2024-07-02T02:35:38.628817590Z" level=info msg="RemoveContainer for \"a0140d39b221e96f9d935ce6a0ff081125c48659358836d5bde9eae8788e7679\"" Jul 2 02:35:38.637662 env[1446]: time="2024-07-02T02:35:38.637610545Z" level=info msg="RemoveContainer for \"a0140d39b221e96f9d935ce6a0ff081125c48659358836d5bde9eae8788e7679\" returns successfully" Jul 2 02:35:38.656737 kubelet[2517]: I0702 02:35:38.656675 2517 topology_manager.go:215] "Topology Admit Handler" podUID="aa85f1c0-39d1-484b-8bb6-50df736ef0ab" podNamespace="kube-system" podName="cilium-99722" Jul 2 02:35:38.656737 kubelet[2517]: E0702 02:35:38.656746 2517 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9f29135b-e5b3-4e03-834d-9577767a578c" containerName="mount-cgroup" Jul 2 02:35:38.656905 kubelet[2517]: I0702 02:35:38.656770 2517 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f29135b-e5b3-4e03-834d-9577767a578c" containerName="mount-cgroup" Jul 2 02:35:38.662003 systemd[1]: Created slice kubepods-burstable-podaa85f1c0_39d1_484b_8bb6_50df736ef0ab.slice. 
Jul 2 02:35:38.703384 kubelet[2517]: I0702 02:35:38.703350 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aa85f1c0-39d1-484b-8bb6-50df736ef0ab-etc-cni-netd\") pod \"cilium-99722\" (UID: \"aa85f1c0-39d1-484b-8bb6-50df736ef0ab\") " pod="kube-system/cilium-99722" Jul 2 02:35:38.703534 kubelet[2517]: I0702 02:35:38.703437 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aa85f1c0-39d1-484b-8bb6-50df736ef0ab-clustermesh-secrets\") pod \"cilium-99722\" (UID: \"aa85f1c0-39d1-484b-8bb6-50df736ef0ab\") " pod="kube-system/cilium-99722" Jul 2 02:35:38.703534 kubelet[2517]: I0702 02:35:38.703464 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aa85f1c0-39d1-484b-8bb6-50df736ef0ab-hostproc\") pod \"cilium-99722\" (UID: \"aa85f1c0-39d1-484b-8bb6-50df736ef0ab\") " pod="kube-system/cilium-99722" Jul 2 02:35:38.703534 kubelet[2517]: I0702 02:35:38.703514 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aa85f1c0-39d1-484b-8bb6-50df736ef0ab-host-proc-sys-net\") pod \"cilium-99722\" (UID: \"aa85f1c0-39d1-484b-8bb6-50df736ef0ab\") " pod="kube-system/cilium-99722" Jul 2 02:35:38.703534 kubelet[2517]: I0702 02:35:38.703533 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aa85f1c0-39d1-484b-8bb6-50df736ef0ab-cilium-run\") pod \"cilium-99722\" (UID: \"aa85f1c0-39d1-484b-8bb6-50df736ef0ab\") " pod="kube-system/cilium-99722" Jul 2 02:35:38.703631 kubelet[2517]: I0702 02:35:38.703593 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aa85f1c0-39d1-484b-8bb6-50df736ef0ab-cilium-config-path\") pod \"cilium-99722\" (UID: \"aa85f1c0-39d1-484b-8bb6-50df736ef0ab\") " pod="kube-system/cilium-99722" Jul 2 02:35:38.703631 kubelet[2517]: I0702 02:35:38.703615 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aa85f1c0-39d1-484b-8bb6-50df736ef0ab-host-proc-sys-kernel\") pod \"cilium-99722\" (UID: \"aa85f1c0-39d1-484b-8bb6-50df736ef0ab\") " pod="kube-system/cilium-99722" Jul 2 02:35:38.703679 kubelet[2517]: I0702 02:35:38.703669 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl9mx\" (UniqueName: \"kubernetes.io/projected/aa85f1c0-39d1-484b-8bb6-50df736ef0ab-kube-api-access-xl9mx\") pod \"cilium-99722\" (UID: \"aa85f1c0-39d1-484b-8bb6-50df736ef0ab\") " pod="kube-system/cilium-99722" Jul 2 02:35:38.703704 kubelet[2517]: I0702 02:35:38.703692 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa85f1c0-39d1-484b-8bb6-50df736ef0ab-lib-modules\") pod \"cilium-99722\" (UID: \"aa85f1c0-39d1-484b-8bb6-50df736ef0ab\") " pod="kube-system/cilium-99722" Jul 2 02:35:38.703761 kubelet[2517]: I0702 02:35:38.703743 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/aa85f1c0-39d1-484b-8bb6-50df736ef0ab-cilium-ipsec-secrets\") pod \"cilium-99722\" (UID: \"aa85f1c0-39d1-484b-8bb6-50df736ef0ab\") " pod="kube-system/cilium-99722" Jul 2 02:35:38.703795 kubelet[2517]: I0702 02:35:38.703773 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/aa85f1c0-39d1-484b-8bb6-50df736ef0ab-xtables-lock\") pod \"cilium-99722\" (UID: \"aa85f1c0-39d1-484b-8bb6-50df736ef0ab\") " pod="kube-system/cilium-99722" Jul 2 02:35:38.703842 kubelet[2517]: I0702 02:35:38.703828 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aa85f1c0-39d1-484b-8bb6-50df736ef0ab-hubble-tls\") pod \"cilium-99722\" (UID: \"aa85f1c0-39d1-484b-8bb6-50df736ef0ab\") " pod="kube-system/cilium-99722" Jul 2 02:35:38.703872 kubelet[2517]: I0702 02:35:38.703855 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aa85f1c0-39d1-484b-8bb6-50df736ef0ab-cilium-cgroup\") pod \"cilium-99722\" (UID: \"aa85f1c0-39d1-484b-8bb6-50df736ef0ab\") " pod="kube-system/cilium-99722" Jul 2 02:35:38.703922 kubelet[2517]: I0702 02:35:38.703907 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aa85f1c0-39d1-484b-8bb6-50df736ef0ab-bpf-maps\") pod \"cilium-99722\" (UID: \"aa85f1c0-39d1-484b-8bb6-50df736ef0ab\") " pod="kube-system/cilium-99722" Jul 2 02:35:38.703956 kubelet[2517]: I0702 02:35:38.703932 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aa85f1c0-39d1-484b-8bb6-50df736ef0ab-cni-path\") pod \"cilium-99722\" (UID: \"aa85f1c0-39d1-484b-8bb6-50df736ef0ab\") " pod="kube-system/cilium-99722" Jul 2 02:35:38.804000 kubelet[2517]: I0702 02:35:38.803973 2517 setters.go:568] "Node became not ready" node="ci-3510.3.5-a-c92d6bc2c6" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T02:35:38Z","lastTransitionTime":"2024-07-02T02:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 2 02:35:38.965469 env[1446]: time="2024-07-02T02:35:38.965424365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-99722,Uid:aa85f1c0-39d1-484b-8bb6-50df736ef0ab,Namespace:kube-system,Attempt:0,}" Jul 2 02:35:38.992284 env[1446]: time="2024-07-02T02:35:38.992200682Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 02:35:38.992284 env[1446]: time="2024-07-02T02:35:38.992242323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 02:35:38.992284 env[1446]: time="2024-07-02T02:35:38.992259004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 02:35:38.992676 env[1446]: time="2024-07-02T02:35:38.992640814Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6bea2f4d40720453feb5cc2f947b60e9a621a1c2aa4b58aa9adea6b7ce520956 pid=4411 runtime=io.containerd.runc.v2 Jul 2 02:35:39.002770 systemd[1]: Started cri-containerd-6bea2f4d40720453feb5cc2f947b60e9a621a1c2aa4b58aa9adea6b7ce520956.scope. 
Jul 2 02:35:39.026125 env[1446]: time="2024-07-02T02:35:39.026079145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-99722,Uid:aa85f1c0-39d1-484b-8bb6-50df736ef0ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"6bea2f4d40720453feb5cc2f947b60e9a621a1c2aa4b58aa9adea6b7ce520956\"" Jul 2 02:35:39.029434 env[1446]: time="2024-07-02T02:35:39.029394953Z" level=info msg="CreateContainer within sandbox \"6bea2f4d40720453feb5cc2f947b60e9a621a1c2aa4b58aa9adea6b7ce520956\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 02:35:39.057859 env[1446]: time="2024-07-02T02:35:39.057796389Z" level=info msg="CreateContainer within sandbox \"6bea2f4d40720453feb5cc2f947b60e9a621a1c2aa4b58aa9adea6b7ce520956\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9385fa45b4649f4c310abcde8fd8278e2635f01e477675800f1ad933e532e663\"" Jul 2 02:35:39.060241 env[1446]: time="2024-07-02T02:35:39.060205213Z" level=info msg="StartContainer for \"9385fa45b4649f4c310abcde8fd8278e2635f01e477675800f1ad933e532e663\"" Jul 2 02:35:39.074770 systemd[1]: Started cri-containerd-9385fa45b4649f4c310abcde8fd8278e2635f01e477675800f1ad933e532e663.scope. Jul 2 02:35:39.108117 env[1446]: time="2024-07-02T02:35:39.108071206Z" level=info msg="StartContainer for \"9385fa45b4649f4c310abcde8fd8278e2635f01e477675800f1ad933e532e663\" returns successfully" Jul 2 02:35:39.112776 systemd[1]: cri-containerd-9385fa45b4649f4c310abcde8fd8278e2635f01e477675800f1ad933e532e663.scope: Deactivated successfully. 
Jul 2 02:35:39.176874 env[1446]: time="2024-07-02T02:35:39.176827074Z" level=info msg="shim disconnected" id=9385fa45b4649f4c310abcde8fd8278e2635f01e477675800f1ad933e532e663 Jul 2 02:35:39.177152 env[1446]: time="2024-07-02T02:35:39.177114682Z" level=warning msg="cleaning up after shim disconnected" id=9385fa45b4649f4c310abcde8fd8278e2635f01e477675800f1ad933e532e663 namespace=k8s.io Jul 2 02:35:39.177223 env[1446]: time="2024-07-02T02:35:39.177209444Z" level=info msg="cleaning up dead shim" Jul 2 02:35:39.185858 env[1446]: time="2024-07-02T02:35:39.185819353Z" level=warning msg="cleanup warnings time=\"2024-07-02T02:35:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4495 runtime=io.containerd.runc.v2\n" Jul 2 02:35:39.254067 kubelet[2517]: E0702 02:35:39.253955 2517 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 02:35:39.630675 env[1446]: time="2024-07-02T02:35:39.630629982Z" level=info msg="CreateContainer within sandbox \"6bea2f4d40720453feb5cc2f947b60e9a621a1c2aa4b58aa9adea6b7ce520956\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 02:35:39.664906 env[1446]: time="2024-07-02T02:35:39.664864253Z" level=info msg="CreateContainer within sandbox \"6bea2f4d40720453feb5cc2f947b60e9a621a1c2aa4b58aa9adea6b7ce520956\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"027dbb3bcf93155967ca91419b5c10de147a89573fd5e1081dce2e574f253763\"" Jul 2 02:35:39.665671 env[1446]: time="2024-07-02T02:35:39.665648034Z" level=info msg="StartContainer for \"027dbb3bcf93155967ca91419b5c10de147a89573fd5e1081dce2e574f253763\"" Jul 2 02:35:39.684106 systemd[1]: Started cri-containerd-027dbb3bcf93155967ca91419b5c10de147a89573fd5e1081dce2e574f253763.scope. 
Jul 2 02:35:39.715110 env[1446]: time="2024-07-02T02:35:39.715069828Z" level=info msg="StartContainer for \"027dbb3bcf93155967ca91419b5c10de147a89573fd5e1081dce2e574f253763\" returns successfully" Jul 2 02:35:39.717368 systemd[1]: cri-containerd-027dbb3bcf93155967ca91419b5c10de147a89573fd5e1081dce2e574f253763.scope: Deactivated successfully. Jul 2 02:35:39.743892 env[1446]: time="2024-07-02T02:35:39.743849753Z" level=info msg="shim disconnected" id=027dbb3bcf93155967ca91419b5c10de147a89573fd5e1081dce2e574f253763 Jul 2 02:35:39.744117 env[1446]: time="2024-07-02T02:35:39.744098040Z" level=warning msg="cleaning up after shim disconnected" id=027dbb3bcf93155967ca91419b5c10de147a89573fd5e1081dce2e574f253763 namespace=k8s.io Jul 2 02:35:39.744178 env[1446]: time="2024-07-02T02:35:39.744165842Z" level=info msg="cleaning up dead shim" Jul 2 02:35:39.751190 env[1446]: time="2024-07-02T02:35:39.751155067Z" level=warning msg="cleanup warnings time=\"2024-07-02T02:35:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4557 runtime=io.containerd.runc.v2\n" Jul 2 02:35:39.886474 kubelet[2517]: W0702 02:35:39.886359 2517 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f29135b_e5b3_4e03_834d_9577767a578c.slice/cri-containerd-a0140d39b221e96f9d935ce6a0ff081125c48659358836d5bde9eae8788e7679.scope WatchSource:0}: container "a0140d39b221e96f9d935ce6a0ff081125c48659358836d5bde9eae8788e7679" in namespace "k8s.io": not found Jul 2 02:35:40.152190 kubelet[2517]: I0702 02:35:40.152112 2517 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="9f29135b-e5b3-4e03-834d-9577767a578c" path="/var/lib/kubelet/pods/9f29135b-e5b3-4e03-834d-9577767a578c/volumes" Jul 2 02:35:40.398306 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-027dbb3bcf93155967ca91419b5c10de147a89573fd5e1081dce2e574f253763-rootfs.mount: Deactivated successfully. 
Jul 2 02:35:40.633460 env[1446]: time="2024-07-02T02:35:40.633414292Z" level=info msg="CreateContainer within sandbox \"6bea2f4d40720453feb5cc2f947b60e9a621a1c2aa4b58aa9adea6b7ce520956\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 02:35:40.655843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2279942741.mount: Deactivated successfully. Jul 2 02:35:40.661551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount887226565.mount: Deactivated successfully. Jul 2 02:35:40.669389 env[1446]: time="2024-07-02T02:35:40.669287919Z" level=info msg="CreateContainer within sandbox \"6bea2f4d40720453feb5cc2f947b60e9a621a1c2aa4b58aa9adea6b7ce520956\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3ee661b9742083a47f6cd3f1875895a97b9d49203a1f119bc1e7d2ba03bbd283\"" Jul 2 02:35:40.671653 env[1446]: time="2024-07-02T02:35:40.671620901Z" level=info msg="StartContainer for \"3ee661b9742083a47f6cd3f1875895a97b9d49203a1f119bc1e7d2ba03bbd283\"" Jul 2 02:35:40.691241 systemd[1]: Started cri-containerd-3ee661b9742083a47f6cd3f1875895a97b9d49203a1f119bc1e7d2ba03bbd283.scope. Jul 2 02:35:40.722319 systemd[1]: cri-containerd-3ee661b9742083a47f6cd3f1875895a97b9d49203a1f119bc1e7d2ba03bbd283.scope: Deactivated successfully. 
Jul 2 02:35:40.724170 env[1446]: time="2024-07-02T02:35:40.724113327Z" level=info msg="StartContainer for \"3ee661b9742083a47f6cd3f1875895a97b9d49203a1f119bc1e7d2ba03bbd283\" returns successfully" Jul 2 02:35:40.753974 env[1446]: time="2024-07-02T02:35:40.753930995Z" level=info msg="shim disconnected" id=3ee661b9742083a47f6cd3f1875895a97b9d49203a1f119bc1e7d2ba03bbd283 Jul 2 02:35:40.754254 env[1446]: time="2024-07-02T02:35:40.754225802Z" level=warning msg="cleaning up after shim disconnected" id=3ee661b9742083a47f6cd3f1875895a97b9d49203a1f119bc1e7d2ba03bbd283 namespace=k8s.io Jul 2 02:35:40.754370 env[1446]: time="2024-07-02T02:35:40.754354646Z" level=info msg="cleaning up dead shim" Jul 2 02:35:40.761497 env[1446]: time="2024-07-02T02:35:40.761462114Z" level=warning msg="cleanup warnings time=\"2024-07-02T02:35:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4615 runtime=io.containerd.runc.v2\n" Jul 2 02:35:41.641365 env[1446]: time="2024-07-02T02:35:41.641329511Z" level=info msg="CreateContainer within sandbox \"6bea2f4d40720453feb5cc2f947b60e9a621a1c2aa4b58aa9adea6b7ce520956\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 02:35:41.668142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1658238672.mount: Deactivated successfully. Jul 2 02:35:41.680444 env[1446]: time="2024-07-02T02:35:41.680396096Z" level=info msg="CreateContainer within sandbox \"6bea2f4d40720453feb5cc2f947b60e9a621a1c2aa4b58aa9adea6b7ce520956\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"df1a0e2d7f3f10c702e2b192bcb130734a1d8358b1867c06aa2e5e87637b0bf0\"" Jul 2 02:35:41.681255 env[1446]: time="2024-07-02T02:35:41.681224478Z" level=info msg="StartContainer for \"df1a0e2d7f3f10c702e2b192bcb130734a1d8358b1867c06aa2e5e87637b0bf0\"" Jul 2 02:35:41.699851 systemd[1]: Started cri-containerd-df1a0e2d7f3f10c702e2b192bcb130734a1d8358b1867c06aa2e5e87637b0bf0.scope. 
Jul 2 02:35:41.728396 systemd[1]: cri-containerd-df1a0e2d7f3f10c702e2b192bcb130734a1d8358b1867c06aa2e5e87637b0bf0.scope: Deactivated successfully. Jul 2 02:35:41.732286 env[1446]: time="2024-07-02T02:35:41.732240815Z" level=info msg="StartContainer for \"df1a0e2d7f3f10c702e2b192bcb130734a1d8358b1867c06aa2e5e87637b0bf0\" returns successfully" Jul 2 02:35:41.758233 env[1446]: time="2024-07-02T02:35:41.758191056Z" level=info msg="shim disconnected" id=df1a0e2d7f3f10c702e2b192bcb130734a1d8358b1867c06aa2e5e87637b0bf0 Jul 2 02:35:41.758502 env[1446]: time="2024-07-02T02:35:41.758481064Z" level=warning msg="cleaning up after shim disconnected" id=df1a0e2d7f3f10c702e2b192bcb130734a1d8358b1867c06aa2e5e87637b0bf0 namespace=k8s.io Jul 2 02:35:41.758581 env[1446]: time="2024-07-02T02:35:41.758568626Z" level=info msg="cleaning up dead shim" Jul 2 02:35:41.765366 env[1446]: time="2024-07-02T02:35:41.765333963Z" level=warning msg="cleanup warnings time=\"2024-07-02T02:35:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4672 runtime=io.containerd.runc.v2\n" Jul 2 02:35:42.398543 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df1a0e2d7f3f10c702e2b192bcb130734a1d8358b1867c06aa2e5e87637b0bf0-rootfs.mount: Deactivated successfully. 
Jul 2 02:35:42.641500 env[1446]: time="2024-07-02T02:35:42.641458863Z" level=info msg="CreateContainer within sandbox \"6bea2f4d40720453feb5cc2f947b60e9a621a1c2aa4b58aa9adea6b7ce520956\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 02:35:42.681538 env[1446]: time="2024-07-02T02:35:42.681440584Z" level=info msg="CreateContainer within sandbox \"6bea2f4d40720453feb5cc2f947b60e9a621a1c2aa4b58aa9adea6b7ce520956\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c5e21216879d5c4e958f936630ce30e3dd9ac0137371242937cf5a5c5e3bb553\"" Jul 2 02:35:42.682350 env[1446]: time="2024-07-02T02:35:42.682297047Z" level=info msg="StartContainer for \"c5e21216879d5c4e958f936630ce30e3dd9ac0137371242937cf5a5c5e3bb553\"" Jul 2 02:35:42.702479 systemd[1]: Started cri-containerd-c5e21216879d5c4e958f936630ce30e3dd9ac0137371242937cf5a5c5e3bb553.scope. Jul 2 02:35:42.734926 env[1446]: time="2024-07-02T02:35:42.734883416Z" level=info msg="StartContainer for \"c5e21216879d5c4e958f936630ce30e3dd9ac0137371242937cf5a5c5e3bb553\" returns successfully" Jul 2 02:35:42.996039 kubelet[2517]: W0702 02:35:42.995871 2517 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa85f1c0_39d1_484b_8bb6_50df736ef0ab.slice/cri-containerd-9385fa45b4649f4c310abcde8fd8278e2635f01e477675800f1ad933e532e663.scope WatchSource:0}: task 9385fa45b4649f4c310abcde8fd8278e2635f01e477675800f1ad933e532e663 not found: not found Jul 2 02:35:43.184344 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Jul 2 02:35:43.398561 systemd[1]: run-containerd-runc-k8s.io-c5e21216879d5c4e958f936630ce30e3dd9ac0137371242937cf5a5c5e3bb553-runc.sQxnoI.mount: Deactivated successfully. 
Jul 2 02:35:43.658584 kubelet[2517]: I0702 02:35:43.658469 2517 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-99722" podStartSLOduration=5.658430272 podStartE2EDuration="5.658430272s" podCreationTimestamp="2024-07-02 02:35:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 02:35:43.658108024 +0000 UTC m=+229.900638154" watchObservedRunningTime="2024-07-02 02:35:43.658430272 +0000 UTC m=+229.900960442" Jul 2 02:35:44.074846 systemd[1]: run-containerd-runc-k8s.io-c5e21216879d5c4e958f936630ce30e3dd9ac0137371242937cf5a5c5e3bb553-runc.mqhseE.mount: Deactivated successfully. Jul 2 02:35:45.718694 systemd-networkd[1606]: lxc_health: Link UP Jul 2 02:35:45.737788 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 02:35:45.735227 systemd-networkd[1606]: lxc_health: Gained carrier Jul 2 02:35:46.103108 kubelet[2517]: W0702 02:35:46.102987 2517 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa85f1c0_39d1_484b_8bb6_50df736ef0ab.slice/cri-containerd-027dbb3bcf93155967ca91419b5c10de147a89573fd5e1081dce2e574f253763.scope WatchSource:0}: task 027dbb3bcf93155967ca91419b5c10de147a89573fd5e1081dce2e574f253763 not found: not found Jul 2 02:35:46.234391 systemd[1]: run-containerd-runc-k8s.io-c5e21216879d5c4e958f936630ce30e3dd9ac0137371242937cf5a5c5e3bb553-runc.rIBn5z.mount: Deactivated successfully. Jul 2 02:35:46.882426 systemd-networkd[1606]: lxc_health: Gained IPv6LL Jul 2 02:35:48.394673 systemd[1]: run-containerd-runc-k8s.io-c5e21216879d5c4e958f936630ce30e3dd9ac0137371242937cf5a5c5e3bb553-runc.Itr9oT.mount: Deactivated successfully. 
Jul 2 02:35:49.214795 kubelet[2517]: W0702 02:35:49.214698 2517 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa85f1c0_39d1_484b_8bb6_50df736ef0ab.slice/cri-containerd-3ee661b9742083a47f6cd3f1875895a97b9d49203a1f119bc1e7d2ba03bbd283.scope WatchSource:0}: task 3ee661b9742083a47f6cd3f1875895a97b9d49203a1f119bc1e7d2ba03bbd283 not found: not found Jul 2 02:35:50.511067 systemd[1]: run-containerd-runc-k8s.io-c5e21216879d5c4e958f936630ce30e3dd9ac0137371242937cf5a5c5e3bb553-runc.LsO2Hc.mount: Deactivated successfully. Jul 2 02:35:52.325345 kubelet[2517]: W0702 02:35:52.325296 2517 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa85f1c0_39d1_484b_8bb6_50df736ef0ab.slice/cri-containerd-df1a0e2d7f3f10c702e2b192bcb130734a1d8358b1867c06aa2e5e87637b0bf0.scope WatchSource:0}: task df1a0e2d7f3f10c702e2b192bcb130734a1d8358b1867c06aa2e5e87637b0bf0 not found: not found Jul 2 02:35:52.643598 systemd[1]: run-containerd-runc-k8s.io-c5e21216879d5c4e958f936630ce30e3dd9ac0137371242937cf5a5c5e3bb553-runc.zsLACR.mount: Deactivated successfully. Jul 2 02:35:52.754753 sshd[4357]: pam_unix(sshd:session): session closed for user core Jul 2 02:35:52.757361 systemd[1]: sshd@25-10.200.20.11:22-10.200.16.10:40162.service: Deactivated successfully. Jul 2 02:35:52.758071 systemd[1]: session-28.scope: Deactivated successfully. Jul 2 02:35:52.758590 systemd-logind[1435]: Session 28 logged out. Waiting for processes to exit. Jul 2 02:35:52.759577 systemd-logind[1435]: Removed session 28. 
Jul 2 02:35:54.173220 env[1446]: time="2024-07-02T02:35:54.173039312Z" level=info msg="StopPodSandbox for \"54f9184d2cae0a10ba6dd5977e15d2bd69ea9ddf50993f59570e1b0d1619a1c5\"" Jul 2 02:35:54.173220 env[1446]: time="2024-07-02T02:35:54.173127954Z" level=info msg="TearDown network for sandbox \"54f9184d2cae0a10ba6dd5977e15d2bd69ea9ddf50993f59570e1b0d1619a1c5\" successfully" Jul 2 02:35:54.173220 env[1446]: time="2024-07-02T02:35:54.173165275Z" level=info msg="StopPodSandbox for \"54f9184d2cae0a10ba6dd5977e15d2bd69ea9ddf50993f59570e1b0d1619a1c5\" returns successfully" Jul 2 02:35:54.176036 env[1446]: time="2024-07-02T02:35:54.175254485Z" level=info msg="RemovePodSandbox for \"54f9184d2cae0a10ba6dd5977e15d2bd69ea9ddf50993f59570e1b0d1619a1c5\"" Jul 2 02:35:54.176036 env[1446]: time="2024-07-02T02:35:54.175287166Z" level=info msg="Forcibly stopping sandbox \"54f9184d2cae0a10ba6dd5977e15d2bd69ea9ddf50993f59570e1b0d1619a1c5\"" Jul 2 02:35:54.176036 env[1446]: time="2024-07-02T02:35:54.175375168Z" level=info msg="TearDown network for sandbox \"54f9184d2cae0a10ba6dd5977e15d2bd69ea9ddf50993f59570e1b0d1619a1c5\" successfully" Jul 2 02:35:54.183290 env[1446]: time="2024-07-02T02:35:54.183175116Z" level=info msg="RemovePodSandbox \"54f9184d2cae0a10ba6dd5977e15d2bd69ea9ddf50993f59570e1b0d1619a1c5\" returns successfully" Jul 2 02:35:54.183834 env[1446]: time="2024-07-02T02:35:54.183678728Z" level=info msg="StopPodSandbox for \"d9c721c062bb7cd81ef1e44c51615958bd05b327311b90454225c4c40ee31c27\"" Jul 2 02:35:54.183834 env[1446]: time="2024-07-02T02:35:54.183747369Z" level=info msg="TearDown network for sandbox \"d9c721c062bb7cd81ef1e44c51615958bd05b327311b90454225c4c40ee31c27\" successfully" Jul 2 02:35:54.183834 env[1446]: time="2024-07-02T02:35:54.183778650Z" level=info msg="StopPodSandbox for \"d9c721c062bb7cd81ef1e44c51615958bd05b327311b90454225c4c40ee31c27\" returns successfully" Jul 2 02:35:54.185273 env[1446]: time="2024-07-02T02:35:54.184232021Z" level=info 
msg="RemovePodSandbox for \"d9c721c062bb7cd81ef1e44c51615958bd05b327311b90454225c4c40ee31c27\"" Jul 2 02:35:54.185273 env[1446]: time="2024-07-02T02:35:54.184255622Z" level=info msg="Forcibly stopping sandbox \"d9c721c062bb7cd81ef1e44c51615958bd05b327311b90454225c4c40ee31c27\"" Jul 2 02:35:54.185273 env[1446]: time="2024-07-02T02:35:54.184328903Z" level=info msg="TearDown network for sandbox \"d9c721c062bb7cd81ef1e44c51615958bd05b327311b90454225c4c40ee31c27\" successfully" Jul 2 02:35:54.191043 env[1446]: time="2024-07-02T02:35:54.190967823Z" level=info msg="RemovePodSandbox \"d9c721c062bb7cd81ef1e44c51615958bd05b327311b90454225c4c40ee31c27\" returns successfully" Jul 2 02:35:54.191514 env[1446]: time="2024-07-02T02:35:54.191364833Z" level=info msg="StopPodSandbox for \"21d28b04794f7e6ac71b066e2bce17ac04ee647f57aff5a23bef48b1acaa8e34\"" Jul 2 02:35:54.191514 env[1446]: time="2024-07-02T02:35:54.191427034Z" level=info msg="TearDown network for sandbox \"21d28b04794f7e6ac71b066e2bce17ac04ee647f57aff5a23bef48b1acaa8e34\" successfully" Jul 2 02:35:54.191514 env[1446]: time="2024-07-02T02:35:54.191452875Z" level=info msg="StopPodSandbox for \"21d28b04794f7e6ac71b066e2bce17ac04ee647f57aff5a23bef48b1acaa8e34\" returns successfully" Jul 2 02:35:54.192197 env[1446]: time="2024-07-02T02:35:54.191744722Z" level=info msg="RemovePodSandbox for \"21d28b04794f7e6ac71b066e2bce17ac04ee647f57aff5a23bef48b1acaa8e34\"" Jul 2 02:35:54.192197 env[1446]: time="2024-07-02T02:35:54.191768483Z" level=info msg="Forcibly stopping sandbox \"21d28b04794f7e6ac71b066e2bce17ac04ee647f57aff5a23bef48b1acaa8e34\"" Jul 2 02:35:54.192197 env[1446]: time="2024-07-02T02:35:54.191821004Z" level=info msg="TearDown network for sandbox \"21d28b04794f7e6ac71b066e2bce17ac04ee647f57aff5a23bef48b1acaa8e34\" successfully" Jul 2 02:35:54.197391 env[1446]: time="2024-07-02T02:35:54.197363777Z" level=info msg="RemovePodSandbox \"21d28b04794f7e6ac71b066e2bce17ac04ee647f57aff5a23bef48b1acaa8e34\" returns successfully" 
Jul 2 02:36:39.307402 systemd[1]: cri-containerd-cc0acad0fd7b4563c9ab8ff2f8624f30b1dca1f43d4b43dacfa146974acf558a.scope: Deactivated successfully. Jul 2 02:36:39.307707 systemd[1]: cri-containerd-cc0acad0fd7b4563c9ab8ff2f8624f30b1dca1f43d4b43dacfa146974acf558a.scope: Consumed 3.640s CPU time. Jul 2 02:36:39.325767 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc0acad0fd7b4563c9ab8ff2f8624f30b1dca1f43d4b43dacfa146974acf558a-rootfs.mount: Deactivated successfully. Jul 2 02:36:39.337660 env[1446]: time="2024-07-02T02:36:39.337604984Z" level=info msg="shim disconnected" id=cc0acad0fd7b4563c9ab8ff2f8624f30b1dca1f43d4b43dacfa146974acf558a Jul 2 02:36:39.337660 env[1446]: time="2024-07-02T02:36:39.337655505Z" level=warning msg="cleaning up after shim disconnected" id=cc0acad0fd7b4563c9ab8ff2f8624f30b1dca1f43d4b43dacfa146974acf558a namespace=k8s.io Jul 2 02:36:39.338064 env[1446]: time="2024-07-02T02:36:39.337669105Z" level=info msg="cleaning up dead shim" Jul 2 02:36:39.345304 env[1446]: time="2024-07-02T02:36:39.345257252Z" level=warning msg="cleanup warnings time=\"2024-07-02T02:36:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5360 runtime=io.containerd.runc.v2\n" Jul 2 02:36:39.744981 kubelet[2517]: I0702 02:36:39.744205 2517 scope.go:117] "RemoveContainer" containerID="cc0acad0fd7b4563c9ab8ff2f8624f30b1dca1f43d4b43dacfa146974acf558a" Jul 2 02:36:39.747780 env[1446]: time="2024-07-02T02:36:39.747736582Z" level=info msg="CreateContainer within sandbox \"64524d898729eb8f84c19c231ac0d4b498a368290eff903104da2af397e3abc8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jul 2 02:36:39.772956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1296894592.mount: Deactivated successfully. Jul 2 02:36:39.779463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3863814375.mount: Deactivated successfully. 
Jul 2 02:36:39.791875 env[1446]: time="2024-07-02T02:36:39.791832274Z" level=info msg="CreateContainer within sandbox \"64524d898729eb8f84c19c231ac0d4b498a368290eff903104da2af397e3abc8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"af7ede4d2252aed785a07830b6b18a9d559ab6d182dc8a20767b3c4d2b86f9b2\"" Jul 2 02:36:39.792634 env[1446]: time="2024-07-02T02:36:39.792604608Z" level=info msg="StartContainer for \"af7ede4d2252aed785a07830b6b18a9d559ab6d182dc8a20767b3c4d2b86f9b2\"" Jul 2 02:36:39.807502 systemd[1]: Started cri-containerd-af7ede4d2252aed785a07830b6b18a9d559ab6d182dc8a20767b3c4d2b86f9b2.scope. Jul 2 02:36:39.850011 env[1446]: time="2024-07-02T02:36:39.849948116Z" level=info msg="StartContainer for \"af7ede4d2252aed785a07830b6b18a9d559ab6d182dc8a20767b3c4d2b86f9b2\" returns successfully" Jul 2 02:36:40.458799 kubelet[2517]: E0702 02:36:40.458520 2517 controller.go:195] "Failed to update lease" err="Put \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-a-c92d6bc2c6?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 2 02:36:40.899560 kubelet[2517]: E0702 02:36:40.899522 2517 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.11:55710->10.200.20.10:2379: read: connection timed out" Jul 2 02:36:41.667576 kubelet[2517]: I0702 02:36:41.667527 2517 status_manager.go:853] "Failed to get status for pod" podUID="663c486df4f058777cbd27507371fb41" pod="kube-system/kube-controller-manager-ci-3510.3.5-a-c92d6bc2c6" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.11:55614->10.200.20.10:2379: read: connection timed out" Jul 2 02:36:44.157251 kubelet[2517]: E0702 02:36:44.157208 2517 event.go:346] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within 
timeout" event="&Event{ObjectMeta:{kube-apiserver-ci-3510.3.5-a-c92d6bc2c6.17de44d61ffeb84b kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-3510.3.5-a-c92d6bc2c6,UID:9e4728c1d3c3744f633a965b67c79a05,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-3510.3.5-a-c92d6bc2c6,},FirstTimestamp:2024-07-02 02:36:34.152585291 +0000 UTC m=+280.395115461,LastTimestamp:2024-07-02 02:36:34.152585291 +0000 UTC m=+280.395115461,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.5-a-c92d6bc2c6,}" Jul 2 02:36:45.817739 update_engine[1437]: I0702 02:36:45.817696 1437 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 2 02:36:45.817739 update_engine[1437]: I0702 02:36:45.817735 1437 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 2 02:36:45.818080 update_engine[1437]: I0702 02:36:45.817862 1437 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 2 02:36:45.818225 update_engine[1437]: I0702 02:36:45.818198 1437 omaha_request_params.cc:62] Current group set to lts Jul 2 02:36:45.818430 update_engine[1437]: I0702 02:36:45.818297 1437 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 2 02:36:45.818430 update_engine[1437]: I0702 02:36:45.818305 1437 update_attempter.cc:643] Scheduling an action processor start. 
Jul 2 02:36:45.818430 update_engine[1437]: I0702 02:36:45.818352 1437 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 2 02:36:45.818430 update_engine[1437]: I0702 02:36:45.818377 1437 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 2 02:36:45.818661 locksmithd[1536]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 2 02:36:45.819055 update_engine[1437]: I0702 02:36:45.819033 1437 omaha_request_action.cc:270] Posting an Omaha request to disabled Jul 2 02:36:45.819055 update_engine[1437]: I0702 02:36:45.819051 1437 omaha_request_action.cc:271] Request: Jul 2 02:36:45.819055 update_engine[1437]: Jul 2 02:36:45.819055 update_engine[1437]: Jul 2 02:36:45.819055 update_engine[1437]: Jul 2 02:36:45.819055 update_engine[1437]: Jul 2 02:36:45.819055 update_engine[1437]: Jul 2 02:36:45.819055 update_engine[1437]: Jul 2 02:36:45.819055 update_engine[1437]: Jul 2 02:36:45.819055 update_engine[1437]: Jul 2 02:36:45.819055 update_engine[1437]: I0702 02:36:45.819057 1437 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 02:36:45.969831 update_engine[1437]: I0702 02:36:45.969795 1437 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 02:36:45.970069 update_engine[1437]: I0702 02:36:45.970047 1437 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 2 02:36:45.992481 systemd[1]: cri-containerd-379d783cdaa96ca12052ba7857c69d8969fbe4c907ffcabc75427229db6e3f6b.scope: Deactivated successfully. Jul 2 02:36:45.992793 systemd[1]: cri-containerd-379d783cdaa96ca12052ba7857c69d8969fbe4c907ffcabc75427229db6e3f6b.scope: Consumed 2.468s CPU time. Jul 2 02:36:46.011935 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-379d783cdaa96ca12052ba7857c69d8969fbe4c907ffcabc75427229db6e3f6b-rootfs.mount: Deactivated successfully. 
Jul 2 02:36:46.033203 env[1446]: time="2024-07-02T02:36:46.033161394Z" level=info msg="shim disconnected" id=379d783cdaa96ca12052ba7857c69d8969fbe4c907ffcabc75427229db6e3f6b Jul 2 02:36:46.033786 env[1446]: time="2024-07-02T02:36:46.033759565Z" level=warning msg="cleaning up after shim disconnected" id=379d783cdaa96ca12052ba7857c69d8969fbe4c907ffcabc75427229db6e3f6b namespace=k8s.io Jul 2 02:36:46.033888 env[1446]: time="2024-07-02T02:36:46.033874527Z" level=info msg="cleaning up dead shim" Jul 2 02:36:46.040209 env[1446]: time="2024-07-02T02:36:46.040177246Z" level=warning msg="cleanup warnings time=\"2024-07-02T02:36:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5422 runtime=io.containerd.runc.v2\n" Jul 2 02:36:46.106051 update_engine[1437]: E0702 02:36:46.105819 1437 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 02:36:46.106051 update_engine[1437]: I0702 02:36:46.105930 1437 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 2 02:36:46.760824 kubelet[2517]: I0702 02:36:46.760795 2517 scope.go:117] "RemoveContainer" containerID="379d783cdaa96ca12052ba7857c69d8969fbe4c907ffcabc75427229db6e3f6b" Jul 2 02:36:46.762670 env[1446]: time="2024-07-02T02:36:46.762601564Z" level=info msg="CreateContainer within sandbox \"3e32dc77847cca519fd7d10bc19d3ceeea09852ba2b44eed4777c45a943f0594\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jul 2 02:36:46.783021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1194126458.mount: Deactivated successfully. Jul 2 02:36:46.789186 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1274695087.mount: Deactivated successfully. 
Jul 2 02:36:46.801847 env[1446]: time="2024-07-02T02:36:46.801801741Z" level=info msg="CreateContainer within sandbox \"3e32dc77847cca519fd7d10bc19d3ceeea09852ba2b44eed4777c45a943f0594\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"f27eac47ae194d1c3540c3c3fe22297821d78b22d068ed7849e51b50fb82de31\"" Jul 2 02:36:46.802507 env[1446]: time="2024-07-02T02:36:46.802474314Z" level=info msg="StartContainer for \"f27eac47ae194d1c3540c3c3fe22297821d78b22d068ed7849e51b50fb82de31\"" Jul 2 02:36:46.816490 systemd[1]: Started cri-containerd-f27eac47ae194d1c3540c3c3fe22297821d78b22d068ed7849e51b50fb82de31.scope. Jul 2 02:36:46.851485 env[1446]: time="2024-07-02T02:36:46.851428476Z" level=info msg="StartContainer for \"f27eac47ae194d1c3540c3c3fe22297821d78b22d068ed7849e51b50fb82de31\" returns successfully" Jul 2 02:36:50.900148 kubelet[2517]: E0702 02:36:50.900112 2517 controller.go:195] "Failed to update lease" err="Put \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-a-c92d6bc2c6?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 2 02:36:56.821077 update_engine[1437]: I0702 02:36:56.820605 1437 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 02:36:56.821077 update_engine[1437]: I0702 02:36:56.820858 1437 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 02:36:56.821077 update_engine[1437]: I0702 02:36:56.821049 1437 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 2 02:36:56.898743 update_engine[1437]: E0702 02:36:56.898707 1437 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 02:36:56.898871 update_engine[1437]: I0702 02:36:56.898811 1437 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 2 02:37:00.901113 kubelet[2517]: E0702 02:37:00.901082 2517 controller.go:195] "Failed to update lease" err="Put \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-a-c92d6bc2c6?timeout=10s\": context deadline exceeded" Jul 2 02:37:01.723446 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#119 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:01.723779 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#123 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:01.730962 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#116 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:01.739815 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#120 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:01.747923 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:01.755394 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#121 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:01.762452 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#118 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:01.769873 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#117 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:01.791729 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#117 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:01.791994 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#118 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:01.798962 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: 
tag#121 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:01.806322 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:01.813449 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#120 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:01.820563 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#116 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:01.827642 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#123 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:01.834883 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#119 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:01.850253 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#184 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.035905 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#185 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.036132 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.036262 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#187 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.036421 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#188 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.036534 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#189 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.036642 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#190 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.036754 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#191 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.036857 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#184 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.036960 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#185 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.037058 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.037161 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#187 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.037260 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#119 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.037379 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#188 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.037535 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#189 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.037646 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#190 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.037747 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#123 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.037850 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#191 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.037948 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#116 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.038049 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#120 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.038151 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#122 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.038250 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#121 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.038369 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#118 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.038464 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#117 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 
02:37:02.038567 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#184 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.044096 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#185 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.051534 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.058848 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#187 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.066566 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#188 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.073939 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#189 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.081199 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#190 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.088712 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#191 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.096406 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.103769 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#129 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.111299 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#130 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.118677 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.125989 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.133633 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#133 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.140911 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#134 cmd 0x2a status: scsi 0x2 
srb 0x4 hv 0xc0000001 Jul 2 02:37:02.148280 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#135 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.155499 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#136 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.162909 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#137 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.171372 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#138 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.178865 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#139 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.187492 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#140 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.195433 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#141 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.204034 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#142 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.214515 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#184 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.230077 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#187 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.230320 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#185 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.238301 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.245933 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#188 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.253878 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#189 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.261731 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#190 
cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.269441 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#191 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.277182 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#129 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.285287 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.293105 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#130 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.300702 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.309037 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.316609 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#133 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.324455 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#134 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.332061 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#135 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.340168 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#136 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.347950 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#137 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.355717 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#139 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.363624 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#138 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.371670 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#140 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 02:37:02.379288 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#141 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.387182 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#143 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.394909 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#142 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.402706 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#144 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.410289 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#145 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.418159 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#146 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.429417 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#147 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.433526 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#148 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.443117 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#149 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.449081 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#150 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.457020 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#151 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.464909 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.472520 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.480442 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.487987 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.496209 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.503724 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.511357 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.519297 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.527126 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.534728 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.542632 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#162 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.551822 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#163 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.560593 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#164 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.569924 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#166 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.578196 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#165 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.586115 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#167 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.593819 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#169 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.601370 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#168 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.609175 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#170 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.617093 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#172 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.625147 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#171 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.649048 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#184 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.649280 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#187 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.649426 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.664275 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#188 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.664511 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#185 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.672055 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#189 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.679547 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#190 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.687275 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#191 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.695483 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#129 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.705180 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#128 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.713788 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#130 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.722401 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.731031 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.740137 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#133 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.750584 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#135 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.759944 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#134 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.769100 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#136 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.777516 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#139 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.785237 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#137 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.794833 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#138 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.801464 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#140 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.809378 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#141 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.817676 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#143 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.825936 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#144 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.834104 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#142 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.842366 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#145 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.850289 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#146 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.858122 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#147 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.866000 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#148 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.874084 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#150 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.882177 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#151 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.891449 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#149 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.899005 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.906926 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.914720 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.922795 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.930674 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.939471 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.948026 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.956508 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.965659 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.974199 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.983264 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#162 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:02.992810 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#163 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:03.000834 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#164 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:03.008930 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#166 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:03.016907 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#165 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:03.024545 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#167 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:03.032255 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#169 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:03.040093 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#168 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:03.048297 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#170 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:03.056509 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#172 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:03.064569 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#173 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:03.072592 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#171 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:03.080973 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#174 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:03.088992 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#175 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:03.097041 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#176 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:03.104826 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#177 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:03.112687 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#179 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:03.120764 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#178 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:03.128780 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#180 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:03.136658 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#181 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:03.145361 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#182 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:03.153569 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#183 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:03.162596 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#208 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:03.172038 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#207 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:03.181238 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#209 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:03.189956 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#211 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:03.198585 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#210 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:03.207248 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:03.215854 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#213 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:03.224881 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#214 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:03.233241 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#215 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:03.241922 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#216 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:03.250688 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#119 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:03.259712 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#217 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:03.267849 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#218 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:03.275778 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#123 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 02:37:03.284195 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#219 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001